
design rigor: electronics vs. software


omni...@gmail.com

Jan 11, 2020, 12:46:23 AM
Hardware designs are more rigorously done than
software designs. A large company had problems with a 737
and a rocket to the space station...

https://www.bloomberg.com/news/articles/2019-06-28/boeing-s-737-max-software-outsourced-to-9-an-hour-engineers

I know programmers who do not care for rigor, at home or at work.
I did hardware design with rigor, including reviews by caring
electronics design engineers and marketing engineers.

Software gets sloppy with OOP (Object-Oriented Programming).
Windows 10 on a rocket to the ISS.
C++ mud.

Rick C

Jan 11, 2020, 1:10:56 AM
I think that is a load. Hardware often fouls up. The two space shuttle disasters were both hardware problems, and both were preventable; there was a clear lack of rigor in the design and execution. The Apollo 13 accident was hardware. The list goes on and on.

Then your very example of the Boeing plane is wrong because no one has said the cause of the accident was improperly coded software.

--

Rick C.

- Get 1,000 miles of free Supercharging
- Tesla referral code - https://ts.la/richard11209

Winfield Hill

Jan 11, 2020, 8:58:09 AM
Rick C wrote...
>
> Then your very example of the Boeing plane is wrong
> because no one has said the cause of the accident
> was improperly coded software.

Yes, it was an improper spec, with dangerous reliance
on poor hardware.


--
Thanks,
- Win

jla...@highlandsniptechnology.com

Jan 11, 2020, 9:47:19 AM
The easier it is to change things, the less careful people are about
doing them. Software, which includes FPGA code, seldom works the first
time. Almost never. The average hunk of fresh code has a mistake
roughly every 10 lines. Or was that three?

FPGAs are usually better than procedural code, but are still mostly
done as hack-and-fix cycles, with software test benches. When we did
OTP (fuse based) FPGAs without test benching, we often got them right
first try. If compiles took longer, people would be more careful.

PCBs usually work the first time, because they are checked and
reviewed, and that is because mistakes are slow and expensive to fix,
and very visible to everyone. Bridges and buildings are almost always
right the first time. They are even more expensive and slow and
visible.

Besides, electronics and structures have established theory, but
software doesn't. Various people just sort of do it.

My Spice sims are often wrong initially, precisely because there are
basically no consequences to running the first try without much
checking. That is of course dangerous; we don't want to base a
hardware design on a sim that runs and makes pretty graphs but is
fundamentally wrong.





--

John Larkin Highland Technology, Inc

The cork popped merrily, and Lord Peter rose to his feet.

"Bunter", he said, "I give you a toast. The triumph of Instinct over Reason"



jla...@highlandsniptechnology.com

Jan 11, 2020, 9:58:06 AM
On 11 Jan 2020 05:57:59 -0800, Winfield Hill <winfie...@yahoo.com>
wrote:
If code kills people, it was improperly coded. Did Boeing's
programmers know nothing about how airplanes work? Just grunted out
lines of code?

DecadentLinux...@decadence.org

Jan 11, 2020, 10:01:25 AM
Winfield Hill <winfie...@yahoo.com> wrote in
news:qvck9...@drn.newsguy.com:
Thanks Win. That guy is nuts. Boeing most certainly did announce
just a few months ago, that it was a software fault.

Dork C does this often.

Winfield Hill

Jan 11, 2020, 10:27:28 AM
DecadentLinux...@decadence.org wrote...
>
> Winfield Hill wrote:
>
>> Rick C wrote...
>>>
>>> Then your very example of the Boeing plane is wrong
>>> because no one has said the cause of the accident
>>> was improperly coded software.
>>
>> Yes, it was an improper spec, with dangerous reliance
>> on poor hardware.
>
> Thanks Win. That guy is nuts. Boeing most certainly
> did announce just a few months ago, that it was a
> software fault.

That's the opposite of my position. I'm sure the coders
made the software do exactly what they were told to make
it do. It was the system engineers and their managers who
made the decisions and wrote the software specs. They
should not be allowed to simply blame "the software".


--
Thanks,
- Win

upsid...@downunder.com

Jan 11, 2020, 10:32:50 AM
On Sat, 11 Jan 2020 06:57:58 -0800, jla...@highlandsniptechnology.com
wrote:

>On 11 Jan 2020 05:57:59 -0800, Winfield Hill <winfie...@yahoo.com>
>wrote:
>
>>Rick C wrote...
>>>
>>> Then your very example of the Boeing plane is wrong
>>> because no one has said the cause of the accident
>>> was improperly coded software.
>>
>> Yes, it was an improper spec, with dangerous reliance
>> on poor hardware.
>
>If code kills people, it was improperly coded. Did Boeing's
>programmers know nothing about how airplanes work? Just grunted out
>lines of code?

GIGO (Garbage In, Garbage Out): garbage specs given to
programmers will produce garbage code, which is then tested against the
(garbage) specs :-)

A competent programmer might ask about, e.g., redundancy issues, but is
easily turned down by upper management.

DecadentLinux...@decadence.org

Jan 11, 2020, 10:44:29 AM
Winfield Hill <winfie...@yahoo.com> wrote in news:qvcpg901bm2
@drn.newsguy.com:
Well, it WAS the finished product that failed, but the true failure
was their inability to ensure proper, robust, failsafe coding.

Altitude- and heading-maintaining 'auto-pilot' is OK, but full-on,
willy-nilly 'take over' of a major maneuverability aspect of the
controls is ludicrous.

And the hardware is at fault too. That jackscrew needed a split
threaded sleeve on it so it could be released if a catastrophic
failure occurred. Yet one more failure point.

The thing is that assuming control over the elevator of a huge
multi-ton mass speeding through the air, not very far from
a catastrophic ground impact, is a bad idea to say the least.

jla...@highlandsniptechnology.com

Jan 11, 2020, 10:49:30 AM
If people can die from bad specs, the programmers should refuse to
write the code.

Some of the things that the MAX code did were insane.

"We were just doing our job." Where have we heard that before?

DecadentLinux...@decadence.org

Jan 11, 2020, 10:54:41 AM
upsid...@downunder.com wrote in
news:aaqj1f52dd6m9j2di...@4ax.com:
You're an idiot. Redundancy is a primary aspect of any mission
critical flight control management.

Can't have a redundant jack screw.

Don't need a redundant attitude sensor. The attitude sensor needs a
mechanical attachment that would allow a pilot to jiggle it to ensure
that it is freely moving and reacting properly to the plane's
position changes. OR use that mechanical connection to manually
adjust a flawed sensor so that the system performing the physical
control at the elevator would get 'good data' from the sensor and the
runaway condition can be recovered from.

They were not given 'garbage specs', so your shithouse programmer
attitude does not even apply.

Winfield Hill

Jan 11, 2020, 11:06:53 AM
DecadentLinux...@decadence.org wrote...
>
> Well, it WAS the finished product that failed, but
> the true failure was their ability to ensure proper,
> robust, failsafe coding.

To me, your operative word is "proper". I'm sure the
code was robust in doing what it was spec'd to do,
and likely included failsafe coding as well. It was
improper specs that created a non-failsafe system.

No doubt the coding was broken up into pieces, each of
which acted in specified ways for its variable inputs,
and which may well have obscured the overall task.

In fact, the output code that implemented the minor
"augmentation" function may not have been revisited
for changes after the systems-level decision was
made to expand the use of the augmentation system
to add anti-stall.


--
Thanks,
- Win

DecadentLinux...@decadence.org

Jan 11, 2020, 11:09:38 AM
jla...@highlandsniptechnology.com wrote in
news:uerj1ft73t41ts3r2...@4ax.com:

> On Sat, 11 Jan 2020 17:32:45 +0200, upsid...@downunder.com
> wrote:
>
>>On Sat, 11 Jan 2020 06:57:58 -0800,
>>jla...@highlandsniptechnology.com wrote:
>>
>>>On 11 Jan 2020 05:57:59 -0800, Winfield Hill
>>><winfie...@yahoo.com> wrote:
>>>
>>>>Rick C wrote...
>>>>>
>>>>> Then your very example of the Boeing plane is wrong
>>>>> because no one has said the cause of the accident
>>>>> was improperly coded software.
>>>>
>>>> Yes, it was an improper spec, with dangerous reliance
>>>> on poor hardware.
>>>
>>>If code kills people, it was improperly coded. Did Boeing's
>>>programmers know nothing about how airplanes work? Just grunted
>>>out lines of code?
>>
>>GiGo (Garbage In Garbage Out) i.e. a garbage specs given to
>>programmers will produce garbage code, which is tested against the
>>(garbage) specs :-)
>>
>>A competent programmer might ask e.g. about redundancy issues but
>>are easily turned down by upper management.
>
> If people can die from bad specs, the programmers should refuse to
> write the code.
>
> Some of the things that the MAX code did were insane.
>
> "We were just doing our job." Where have we heard that before?
>
>
Heard it a lot from captured Nazi war criminals.

Lemmie see... Ollie North... Trump's retarded cabinet...

Rick C

Jan 11, 2020, 11:47:50 AM
On Saturday, January 11, 2020 at 9:47:19 AM UTC-5, jla...@highlandsniptechnology.com wrote:
> On Fri, 10 Jan 2020 21:46:19 -0800 (PST), omni...@gmail.com wrote:
>
> >Hardware designs are more rigorously done than
> >software designs. A large company had problems with a 737
> >and a rocket to the space station...
> >
> >https://www.bloomberg.com/news/articles/2019-06-28/boeing-s-737-max-software-outsourced-to-9-an-hour-engineers
> >
> >I know programmers who do not care for rigor at home at work.
> >I did hardware design with rigor and featuring reviews by caring
> >electronics design engineers and marketing engineers.
> >
> >Software gets sloppy with OOPs.
> >Object Oriented Programming.
> >Windows 10 on a rocket to ISS space station.
> >C++ mud.
>
> The easier it is to change things, the less careful people are about
> doing them. Software, which includes FPGA code, seldom works the first
> time. Almost never. The average hunk of fresh code has a mistake
> roughly every 10 lines. Or was that three?

There is a very well-known rule about software (assuming you include FPGA HDL as software) that says it is much, much easier to find bugs early rather than late. I write code that is tested thoroughly in simulation, and it seldom has bugs when run in an FPGA. Yes, even "seldom" does not preclude bugs, but they are the exception, not the rule as you suggest.

I suggest you have your HDL designers spend more time working on their code and less time debugging it in the chip or, worse, in the system, the hardest place to find bugs.

Usually the large number of bugs found in software compared to hardware is due to the fact that the software has much, much more to do than the hardware. I would expect that to be pretty obvious to even a casual observer.


> FPGAs are usually better than procedural code, but are still mostly
> done as hack-and-fix cycles, with software test benches. When we did
> OTP (fuse based) FPGAs without test benching, we often got them right
> first try. If compiles took longer, people would be more careful.

When a description of development refers to people working "carefully", it sounds like a very undisciplined effort. I've worked in Mil-spec environments where the process was formalized. There, the level and processes of "care" are standardized and consistent.


> PCBs usually work the first time, because they are checked and
> reviewed, and that is because mistakes are slow and expensive to fix,
> and very visible to everyone. Bridges and buildings are almost always
> right the first time. They are even more expensive and slow and
> visible.

PCBs typically work the first time because they are simple: easy to review, and simple to specify and analyze. PCBs are the poster child for getting it right the first time by applying requirements and evaluating against those requirements.


> Besides, electronics and structures have established theory, but
> software doesn't. Various people just sort of do it.

In your organization.


> My Spice sims are often wrong initially, precisely because there are
> basically no consequences to running the first try without much
> checking. That is of course dangerous; we don't want to base a
> hardware design on a sim that runs and makes pretty graphs but is
> fundamentally wrong.

And yet you don't bother to apply rigorous design techniques to any part of your design process, preferring to work by massive amounts of inspection and testing. Ok, but that won't work on projects of any real size or complexity. All of your products are fairly simple, single boxes.

Go design a missile some time. Or even a bridge over the Tacoma Narrows.

--

Rick C.

+ Get 1,000 miles of free Supercharging
+ Tesla referral code - https://ts.la/richard11209

Rick C

Jan 11, 2020, 11:52:50 AM
On Saturday, January 11, 2020 at 9:58:06 AM UTC-5, jla...@highlandsniptechnology.com wrote:
> On 11 Jan 2020 05:57:59 -0800, Winfield Hill <winfie...@yahoo.com>
> wrote:
>
> >Rick C wrote...
> >>
> >> Then your very example of the Boeing plane is wrong
> >> because no one has said the cause of the accident
> >> was improperly coded software.
> >
> > Yes, it was an improper spec, with dangerous reliance
> > on poor hardware.
>
> If code kills people, it was improperly coded. Did Boeing's
> programmers know nothing about how airplanes work? Just grunted out
> lines of code?

I can assure you that the hands that typed the code knew nothing about how airplanes fly. Neither did the feet that carried the person to the seat where they typed the code or the rear that supported the person as they sat and typed.

Does everyone in your company know how the entire design process works or the products work? Do your board layout people know how the products function? That's why companies have technicians and engineers. Both are needed, but why pay an engineer's salary to get technician work done?

--

Rick C.

-- Get 1,000 miles of free Supercharging
-- Tesla referral code - https://ts.la/richard11209

John S

Jan 11, 2020, 12:07:32 PM
On 1/11/2020 10:52 AM, Rick C wrote:
> On Saturday, January 11, 2020 at 9:58:06 AM UTC-5, jla...@highlandsniptechnology.com wrote:
>> On 11 Jan 2020 05:57:59 -0800, Winfield Hill <winfie...@yahoo.com>
>> wrote:
>>
>>> Rick C wrote...
>>>>
>>>> Then your very example of the Boeing plane is wrong
>>>> because no one has said the cause of the accident
>>>> was improperly coded software.
>>>
>>> Yes, it was an improper spec, with dangerous reliance
>>> on poor hardware.
>>
>> If code kills people, it was improperly coded. Did Boeing's
>> programmers know nothing about how airplanes work? Just grunted out
>> lines of code?
>
> I can assure you that the hands that typed the code knew nothing about how airplanes fly. Neither did the feet that carried the person to the seat where they typed the code or the rear that supported the person as they sat and typed.

Please provide the assurance. Your word alone is not enough.

(snip)

Rick C

Jan 11, 2020, 12:16:05 PM
Indeed. Shall I call for the hands and have them type for you? Here they are...

"I am the hands that typed in the code for the Boeing 737 MAX MCAS processor. I know nothing about aircraft. I only type what the brain tells me to."

Is that good enough for you?

--

Rick C.

-+ Get 1,000 miles of free Supercharging
-+ Tesla referral code - https://ts.la/richard11209

DecadentLinux...@decadence.org

Jan 11, 2020, 12:59:10 PM
Rick C <gnuarm.del...@gmail.com> wrote in
news:cb3bf0ea-0cdc-42d7...@googlegroups.com:

> I can assure you that the hands that typed the code knew nothing
> about how airplanes fly.

Then you know abso-fucking-lutely nothing about aircraft
manufacturers' engineering staff.

DecadentLinux...@decadence.org

Jan 11, 2020, 1:08:38 PM
Rick C <gnuarm.del...@gmail.com> wrote in news:cb3bf0ea-0cdc-
42d7-b402-9...@googlegroups.com:

> Do your board layout people know how the products function?

What a stupid question.

I would not hire layout staff who were unable to understand the
board they were laying out. It is REQUIRED, especially if analog
signals are included.

Board layout people? More like PCB design engineers. It's not just
about the schematic and what that engineer put into the circuit. Where
the parts get placed, and how their traces run, matters.

It ain't just point a to point b.

John S

Jan 11, 2020, 4:25:00 PM
No. Name the author of that comment and give the link to your source.

John Larkin

Jan 11, 2020, 5:03:43 PM
I don't think that any of my PCB layout people understood electronics.
They do learn about trace widths and impedances and manufacturing
issues, but I have to get them started on each layout, and I usually
place+route the tricky parts myself.

My three best layout people were women with no engineering background.
The Brat was/is the best, and she majored in softball and beer pong.

She did this one.

https://www.dropbox.com/s/w7ulg68pvni3hpf/Tem_Plus_PCB.JPG?raw=1

I thought the traces from the ADCs up into the FPGA were especially
elegant. I let her pick the BGA balls for best routing.


--

John Larkin Highland Technology, Inc trk

jlarkin att highlandtechnology dott com
http://www.highlandtechnology.com

bitrex

Jan 11, 2020, 5:31:33 PM
Don't know why C++ is getting the rap here. Modern C++ design is
rigorous: there are books about what to do and what not to do, and the
language has built-in facilities to ensure that, e.g., memory is never
leaked, pointers always refer to an object that exists, and the user
can't ever add feet to meters if they're not supposed to.
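
A minimal sketch of the kind of facility I mean (the types and numbers are
made up for illustration, not from any real project):

#include <memory>

// Trivial strong types: feet and meters can't be mixed by accident.
struct Meters { double value; };
struct Feet   { double value; };

Meters operator+(Meters a, Meters b) { return {a.value + b.value}; }
// No operator+(Meters, Feet) exists, so "meters + feet" won't compile.

struct Sensor { double read() const { return 42.0; } };

int main() {
    // unique_ptr frees the Sensor automatically: no leak, no double delete.
    auto s = std::make_unique<Sensor>();
    Meters altitude{100.0};
    Meters climb{5.0};
    Meters total = altitude + climb;      // fine
    // Meters bad = altitude + Feet{3.0}; // compile error, by design
    return static_cast<int>(total.value + s->read());
}

None of that costs anything at runtime; it just has to be used.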

If the developer chooses to ignore it all like they always know better
than the people who wrote the books on it, well, God bless...

Embedded software is likely more reliable than ever, believe it or not.
The infotainment system in my Chevy has crashed once in three years,
99.999% reliable. There's probably a million lines of C++ behind the
scenes of that thing. Does Chevy employ the best coders in the world?
Probably not.



Rick C

Jan 11, 2020, 6:58:05 PM
That board is duck soup to lay out. It literally would take very little skill to do it. Mostly it is easy because there is so much space to work in; all you need to do is route the signals.

Can't really see detail around the BGA, but it looks like either no vias or via-in-pad. Either way, the traces are fat enough that you probably routed them to the outer two rows only. That really makes life easy with BGA routing.

I try to avoid BGAs because of the fine design rules they place on the boards. Maybe that's no big deal with some fab houses, but in general going below 5/5 design rules starts to cost more in the board. The microvias run the price up too.

--

Rick C.

+- Get 1,000 miles of free Supercharging
+- Tesla referral code - https://ts.la/richard11209

Rick C

Jan 11, 2020, 7:00:12 PM
On Saturday, January 11, 2020 at 5:31:33 PM UTC-5, bitrex wrote:
> The infotainment system in my Chevy has crashed once in three years,
> 99.999% reliable. There's probably a million lines of C++ behind the
> scenes of that thing. Does Chevy employ the best coders in the world?
> Probably not.

Maybe the best in India... I'm just sayin'...

--

Rick C.

++ Get 1,000 miles of free Supercharging
++ Tesla referral code - https://ts.la/richard11209

John Robertson

Jan 11, 2020, 7:53:32 PM
On 2020/01/11 2:31 p.m., bitrex wrote:
> On 1/11/20 9:47 AM, jla...@highlandsniptechnology.com wrote:
>> On Fri, 10 Jan 2020 21:46:19 -0800 (PST), omni...@gmail.com wrote:
>>
>>> Hardware designs are more rigorously done than
>>> software designs. A large company had problems with a 737
>>> and a rocket to the space station...
>>>...
>
> Embedded software is likely more reliable than ever, believe it or not.
> The infotainment system in my Chevy has crashed once in three years,
> 99.999% reliable. There's probably a million lines of C++ behind the
> scenes of that thing. Does Chevy employ the best coders in the world?
> Probably not.
>
>
>

If your car crashed once every three years due to software glitches I
don't think you would be as impressed...

John :-#(#

Winfield Hill

Jan 11, 2020, 8:50:25 PM
Rick C wrote...
>
> That board is duck soup to lay out.

I dunno, a 176-pin PLCC and a 256-pin BGA, plus
lots of other critical stuff, that's not so clear.

Anyway, I think John made his point.


--
Thanks,
- Win

Rick C

Jan 11, 2020, 10:10:01 PM
On Saturday, January 11, 2020 at 8:50:25 PM UTC-5, Winfield Hill wrote:
> Rick C wrote...
> >
> > That board is duck soup to lay out.
>
> I dunno, a 176-pin PLCCC and a 256-pin BGA, plus
> lots of other critical stuff, that's not so clear.

I don't follow your thinking. The size of the parts isn't important if there is lots of space to run the traces. The 176-pin QFP is trivial, really. Notice it only has connections to a couple of dozen pads.

This board was stuffed to the gills with parts on both sides and was a very, very challenging layout. The rev 1.1 board was in production and some upgrades were requested. The result barely fit on the board. At one point I was ready to give up, and then I found a way to better overlap pads on the two sides to free up just enough space to complete the routing.

http://arius.com/images/MS-DCARD-2.0_both.png

That was a hard layout.

If I have to redo the board it will require using a BGA unless one of the new FPGA brands offers a part in an appropriate package. The BGA has many more pins with little advantage since I don't need the large number of I/Os. In fact they would make routing harder given the difficulties of fan-out on a BGA. That's why having a lot of board space makes routing a snap.

> Anyway, I think John made his point.

And what was that other than showing his design?

--

Rick C.

--- Get 1,000 miles of free Supercharging
--- Tesla referral code - https://ts.la/richard11209

John Larkin

Jan 12, 2020, 12:42:54 AM
On 11 Jan 2020 17:50:07 -0800, Winfield Hill <winfie...@yahoo.com>
wrote:
There are four photodiode time stampers with 6 ps resolution, and
three delay generators with sub-ps resolution. There's a high-speed
SPI-like link to a control computer, and five more to energy
measurement boxes. Lots of controlled-impedance clocks and signals.

Rev A worked perfectly first try. No breadboards, no prototypes, no
cuts or jumpers. 6 layers.

bitrex

Jan 12, 2020, 3:01:16 AM
The article is from June of last year. Zero evidence that "outsourced
coders" had anything to do with the 737 MAX's fatal problems.

Nope, despite numerous attempts to pin the blame on the backwards
foreigners, all the evidence points to the shitheads in question being
the best lily-white American-know-how engineers and managers money could
buy, employed at the top levels of Boeing.

bitrex

Jan 12, 2020, 3:04:24 AM
On 1/11/20 7:00 PM, Rick C wrote:
> On Saturday, January 11, 2020 at 5:31:33 PM UTC-5, bitrex wrote:
>> The infotainment system in my Chevy has crashed once in three years,
>> 99.999% reliable. There's probably a million lines of C++ behind the
>> scenes of that thing. Does Chevy employ the best coders in the world?
>> Probably not.
>
> Maybe the best in India... I'm just sayin'...
>

>

The linked article is from June of last year. Zero evidence that "outsourced
coders" had anything to do with the 737 MAX's fatal problems.

Nope, despite numerous attempts to pin the blame on the backwards
foreigners, all the evidence points to the shitheads in question being
the best lily-white American-know-how engineers and managers money could
buy, employed at the top levels of Boeing.

Martin Brown

Jan 12, 2020, 11:58:45 AM
On 11/01/2020 14:57, jla...@highlandsniptechnology.com wrote:
> On 11 Jan 2020 05:57:59 -0800, Winfield Hill <winfie...@yahoo.com>
> wrote:
>
>> Rick C wrote...
>>>
>>> Then your very example of the Boeing plane is wrong
>>> because no one has said the cause of the accident
>>> was improperly coded software.
>>
>> Yes, it was an improper spec, with dangerous reliance
>> on poor hardware.
>
> If code kills people, it was improperly coded.

Not necessarily. The code written may well have exactly implemented the
algorithm(s) that the clowns supervised by monkeys specified. It isn't
the job of programmers to double-check the workings of the people who do
the detailed calculations of aerodynamic force vectors and torques.

It is not the programmers' fault if the systems engineering, failure
analysis and aerodynamics calculations are incorrect in some way!

They knew that the whole design was a rat's nest intended to make the
737 MAX flyable by people with a couple of hours' "training" on an iPad.
It was a triumph of marketing might over good engineering practice.

I have never been in the position of coding software that would actually
kill people, but I have been put in the position by aggressive salesmen
where meeting a customer's specification would require the repeal of one
or more laws of physics. The guys who sell stuff on a wing and a prayer
typically move on fast enough that, after pocketing their quadratic
over-target sales bonus, they are well out of it before the shit hits the fan.

> Did Boeing's
> programmers know nothing about how airplanes work? Just grunted out
> lines of code?

They get a specification which in the strictest terms possible specifies
what it must do in all cases. In aerospace you would expect every possible
path to be fully tested, including the seldom-travelled worst-case error
recovery ones. Boeing used to be fantastically good at this!

The snag is that if someone changes the maximum allowed limit from a fairly
reasonable 0.6 degrees to a larger 2.5 degrees, then all bets are off. The
code would have been fine with the original 0.6 degree adjustment limit
told to the FAA and other international flight safety organisations.

--
Regards,
Martin Brown

Tom Gardner

Jan 12, 2020, 1:25:50 PM
On 12/01/20 16:58, Martin Brown wrote:
> I have never been in the position of coding software that would actually kill
> people but I have been put in the position by aggressive salesmen where meeting
> a customers specification would require the repeal of one or more laws of
> physics.

I expect everybody here has seen that.

Useful phrases include "that's great; how did you solve the
Byzantine general's problem?", and similar.


> The guys who sell stuff on a wing and a prayer typically move on fast
> enough that after pocketing their quadratic over target sales bonus they are
> well out of it before the shit hits the fan.

Yup, seen that too, and not just w.r.t. software!

Trying to change the culture so they don't get their
bonus until after customer acceptance (or even
engineering sign off) is an exercise in futility.

Related point: all sales forecasts climb rapidly
after 2 years. No need to guess why.

DecadentLinux...@decadence.org

Jan 12, 2020, 1:32:34 PM
Martin Brown <'''newspam'''@nezumi.demon.co.uk> wrote in
news:qvfj7v$fl6$1...@gioia.aioe.org:

> Not necessarily. The code written may well have exactly
> implemented the algorithm(s) that the clowns supervised by monkeys
> specified. It isn't the job of programmers to double check the
> workings of the people who do the detailed calculations of
> aerodynamic force vectors and torques.
>
> It is not the programmers fault if the systems engineering,
> failure analysis and aerodynamics calculations are incorrect in
> some way!

"the programmers" at those levels likely DO have to do some of the
calculations in the crafting of their code.

Shit C coders and "Aerodynamic Engineers with coding acumen" are
two different things.

Phil Hobbs

Jan 12, 2020, 3:20:48 PM
On 2020-01-12 11:58, Martin Brown wrote:
> On 11/01/2020 14:57, jla...@highlandsniptechnology.com wrote:
>> On 11 Jan 2020 05:57:59 -0800, Winfield Hill <winfie...@yahoo.com>
>> wrote:
>>
>>> Rick C wrote...
>>>>
>>>> Then your very example of the Boeing plane is wrong
>>>> because no one has said the cause of the accident
>>>> was improperly coded software.
>>>
>>> Yes, it was an improper spec, with dangerous reliance
>>> on poor hardware.
>>
>> If code kills people, it was improperly coded.
>
> Not necessarily. The code written may well have exactly implemented the
> algorithm(s) that the clowns supervised by monkeys specified. It isn't
> the job of programmers to double check the workings of the people who do
> the detailed calculations of aerodynamic force vectors and torques.
>
> It is not the programmers fault if the systems engineering, failure
> analysis and aerodynamics calculations are incorrect in some way!

That's a bit facile, I think. Folks who take an interest in their
professions aren't that easy to confine that way.

Back in my one foray into big-system design, we design engineers were
always getting in the systems guys' faces about various pieces of
stupidity in the specs. It was all pretty good-natured, and we wound up
with the pain and suffering distributed about equally.



Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com

DecadentLinux...@decadence.org

Jan 12, 2020, 3:32:06 PM
Phil Hobbs <pcdhSpamM...@electrooptical.net> wrote in
news:fb4888b5-e96f-1145...@electrooptical.net:

> Back in my one foray into big-system design, we design engineers
> were always getting in the systems guys' faces about various
> pieces of stupidity in the specs. It was all pretty good-natured,
> and we wound up with the pain and suffering distributed about
> equally.
>
>

That is how men get work done... even 'the programmers'.
Very well said, there.

That is like the old dig on 'the hourly help'.

Some programmers are very smart. Others not so much.

I guess choosing to go into it is not such a smart move so they
take a hit from the start. :-)

jjhu...@gmail.com

Jan 12, 2020, 5:39:03 PM
If that is how men get work done, then they are not using the software and system engineering techniques developed in the last 15-20 years, and their results are *still* subject to the same types of errors. I do research and teach in this area. A number of studies, and one in particular, cite up to 70% of software faults as being introduced on the LHS of the 'V' development model (other software design lifecycle models have similar fault percentages). A major issue is that most of these errors are observed at integration time (software+software, software+hardware). The cost of defect removal along the RHS of the 'V' development model is anywhere from 50-200X the removal cost along the LHS of the 'V' (no wonder systems cost so much).

The talk about errors in this thread is very high level, and most ppl have the mindset that they are thinking about errors at the unit-test level. There are numerous techniques developed to identify and fix fault types throughout the entire development lifecycle, but regrettably a lot of them are not employed. Actually, a large percentage of the errors are discovered and fixed at that level. Errors of the type: units mismatch, variable type mismatch, and a slew of concurrency issues aren't discovered till integration time. Usually, at that point, there is a 'rush' to get the system fielded. The horror stories and lessons learned are well documented.

IDK what exactly happened (yet) with the Boeing MAX development. I do have info from some sources that cannot be disclosed at this point. From what I've read, there were major mistakes made from inception through implementation and integration. My personal view is that one should almost never (never?) place the task on software to correct an inherently unstable airframe design - it is putting a bandaid on the source of the problem. Another major issue is that the hazard analysis and fault tolerance approach was not done at the system level (the redundancy approach was pitiful, as well as the *logic* used in implementing it, as well as the concept).

I do think that the better software engineers do have a more holistic view of the system (hardware knowledge + system operational knowledge) which will allow them to ask questions when things don't 'seem right.' OTOH, the software engineers should not go making assumptions about things and coding to those assumptions. (It happens more than you think.) It is the job of the software architect to ensure that any development assumptions are captured and specified in the software architecture.

In studies I have looked at, the percentage of requirements errors is somewhere between 30-40% of the overall number of faults during the design lifecycle, and the 'industry standard' approach to dealing with this problem is woefully inadequate despite techniques to detect and remove the errors. A LOT of time is spent doing software requirements tracing as opposed to doing verification of requirements. People argue that one cannot verify the requirements until the system has been built - which is complete BS, but industry is very slow to change. We have shown that using software architecture modeling addresses a large percentage of system-level problems early in the design life cycle. We are trying to convince industry. Until change happens, the parade of failures like the MAX will continue.

Phil Hobbs

Jan 12, 2020, 5:55:08 PM
On 2020-01-12 17:38, jjhu...@gmail.com wrote:
> On Sunday, January 12, 2020 at 3:32:06 PM UTC-5,
Nice rant. Could you tell us more about the 'V' model?

> The talk about errors in this thread are very high level and most
> ppl have the mindset that they are thinking about errors at the unit
> test level. There are numerous techniques developed to identify and
> fix fault types throughout the entire development lifecycle but
> regrettably a lot of them are not employed.

What sorts of techniques do you use to find problems in the specifications?
> Actually a large percentage of the errors are discovered and fixed at
> that level. Errors of the type: units mismatch, variable type
> mismatch, and a slew of concurrency issues aren't discovered till
> integration time. Usually, at that point, there is a 'rush' to get
> the system fielded. The horror stories and lessons learned are well
> documented.

Yup. Leaving too much stuff for the system integration step is a very
very well-known way to fail.

> IDK what exactly happened (yet) with the Boeing MAX development. I
> do have info from some sources that cannot be disclosed at this
> point. From what I've read, there were major mistakes made from
> inception through implementation and integration. My personal view,
> is that one should almost never (never?) place the task on software
> to correct an inherently unstable airframe design - it is putting a
> bandaid on the source of the problem.

It's commonly done, though, isn't it? I remember reading Ben Rich's
book on the Skunk Works, where he says that the F-117's very squirrelly
handling characteristics were fixed up in software to make it a
beautiful plane to fly. That was about 1980.

> Another major issue is the hazard analysis and fault tolerance
> approach was not done at the system (the redundancy approach was
> pitiful, as well as the *logic* used in implementing it as well as
> conceptual.

> I do think that the better software engineers do have a more
> holistic view of the system (hardware knowledge + system operational
> knowledge) which will allow them to ask questions when things don't
> 'seem right.' OTHO, the software engineers should not go making
> assumptions about things and coding to those assumptions. (It
> happens more than you think) It is the job of the software architect
> to ensure that any development assumptions are captured and specified
> in the software architecture.

In real life, though, it's super important to have two-way
communications during development, no? My large-system experience was
all hardware (the first civilian satellite DBS system, 1981-83), so
things were quite a bit simpler than in a large software-intensive
system. I'd expect the need for bottom-up communication to be greater
now rather than less.

> In studies I have looked at, the percentage of requirements errors
> is somewhere between 30-40% of the overall number of faults during
> the design lifecycle, and the 'industry standard' approach approach
> to dealing with this problem is woefully indequate despite techniques
> to detect and remove the errors. A LOT Of time is spent doing
> software requirements tracing as opposed to doing verification of
> requirements. People argue that one cannot verify the requirements
> until the system has been built - which is complete BS but industry
> is very slow to change. We have shown that using software
> architecture modeling addresses a large percentage of system level
> problems early in the design life cycle. We are trying to convince
> industry. Until change happens, the parade of failures like the
> MAX will continue.

I'd love to hear more about that.

George Herold

Jan 12, 2020, 6:10:47 PM
On Sunday, January 12, 2020 at 12:42:54 AM UTC-5, John Larkin wrote:
> On 11 Jan 2020 17:50:07 -0800, Winfield Hill <winfie...@yahoo.com>
> wrote:
>
> >Rick C wrote...
> >>
> >> That board is duck soup to lay out.
> >
> > I dunno, a 176-pin PLCCC and a 256-pin BGA, plus
> > lots of other critical stuff, that's not so clear.
> >
> > Anyway, I think John made his point.
>
> There are four photodiode time stampers with 6 ps resolution, and
> three delay generators with sub-ps resolution. There's a high-speed
> SPI-like link to a control computer, and five more to energy
> measurement boxes. Lots of controlled-impedance clocks and signals.
Wow! That sounds like quite a box. Four inputs? What's the dead time
on a channel? More or less than $10k?

George H.

Klaus Kragelund

Jan 12, 2020, 6:34:48 PM
I guess he's referring to this one:

https://am7s.com/what-is-v-model7-model-systems-engineering/

We use it at work, or actually used to use it. Now we are transitioning to agile methods, since the V model is really rigid and responds poorly to changes during development. In particular, SW can benefit a lot from an agile mindset and from writing automated tests with high coverage.
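
A bare-bones example of what "automated test" means in practice (hypothetical function and values, plain asserts instead of a real test framework):

#include <cassert>
#include <cmath>

// Hypothetical unit under test: convert a raw ADC count to degrees.
double counts_to_degrees(int counts) { return counts * 0.1; }

int main() {
    // The build runs these every time; any regression fails the build.
    assert(std::fabs(counts_to_degrees(0)   - 0.0) < 1e-9);
    assert(std::fabs(counts_to_degrees(10)  - 1.0) < 1e-9);
    assert(std::fabs(counts_to_degrees(-10) + 1.0) < 1e-9);
    return 0;
}

Scale that up over the whole code base and you get the coverage that makes refactoring during an agile project safe.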

Cheers

Klaus

Rick C

Jan 12, 2020, 6:39:00 PM
On Sunday, January 12, 2020 at 5:39:03 PM UTC-5, jjhu...@gmail.com wrote:
>
> If that is how men get work done then they are not using software and system engineering techniques developed in the last 15-20 years and their results are *still* subject to the same types of errors. I do research and teach in this area. A number of studies, and one in particular, cites up to 70% of software faults are introduced on the LHS of the 'V' development model (Other software design lifecycle models have similar fault percentages.) A major issue is that most of these errors are observed at integration time (software+software, software+hardware). The cost of defect removal along the RHS of the 'V' development model is anywhere from 50-200X of the removal cost along the LHS of the 'V'. (no wonder why systems cost so much)

That reminds me of a fact of designing FPGAs that surprised me when I realized it. We go to great lengths to assure the proper design of the code that goes into logic devices. But an equally important part is the timing of the logic paths. We have constraints that we use to specify the timing requirements, which are then used to check the speed of the resulting logic in analysis. However, we don't have a way to verify that the constraints themselves specify what we intended. So any logic design can potentially be a failure due to improper timing constraints, which cannot be tested or verified to be correct.

Go figure!

--

Rick C.

- Get 1,000 miles of free Supercharging

Clifford Heath

Jan 12, 2020, 6:47:16 PM
On 13/1/20 9:55 am, Phil Hobbs wrote:
> On 2020-01-12 17:38, jjhu...@gmail.com wrote:
>> The cost of defect removal
>> along the RHS of the 'V' development model is anywhere from 50-200X
>> of the removal cost along the LHS of the 'V'. (no wonder why systems
>> cost so much)
>
> Nice rant.  Could you tell us more about the 'V' model?
>
>> The talk about errors in this thread are very high level and most
>> ppl have the mindset that they are thinking about errors at the unit
>> test level. There are numerous techniques developed to identify and
>> fix fault types throughout the  entire development lifecycle but
>> regrettably a lot of them are not employed.
>
> What sorts of techniques to you use to find problems in the specifications?

See below for pointers to John Hudak's and SEI's work in this area.

There are a number of other approaches that I don't see covered in their
work, too.

<https://lamport.azurewebsites.net/tla/tla.html> is one.
<http://factbasedmodeling.org/> is another.

All work on different aspects of verification, but basically they aim to
express (model) the problem in different ways to allow it to be
inspected and tested with hypothesized situations to find anomalies.

FBM looks for static anomalies (a model which allows any situation that
makes no sense). TLA looks for behavioural anomalies (sequence of
actions which could violate a system constraint).
AADL looks for performance/real-time anomalies.

>> It is the job of the software architect
>> to ensure that any development assumptions are captured and specified
>> in the software architecture.
>
> In real life, though, it's super important to have two-way
> communications during development, no?  My large-system experience was
> all hardware (the first civilian satellite DBS system, 1981-83), so
> things were quite a bit simpler than in a large software-intensive
> system.  I'd expect the need for bottom-up communication to be greater
> now rather than less.

The biggest difficulty with bottom-up communication is that the folk "at
the bottom" work with highly technical or formal artefacts, and feel the
need to communicate in the same way - but the folk who need to
understand what is being said simply don't understand what is being
said, and being frequently more senior, don't want to admit their lack
of understanding.

There is a deep gulf between requirements specification and
implementation. Folk in implementation use their formal methods training
to spot logical errors in the specifications, and assume that the reason
is that the requirements folk simply don't know what they want.
Sometimes they're right, but more often, they simply don't have a
sufficiently precise language to express it.

The gulf can be crossed - but only by formal languages that can be
expressed in understandable ways.

Building tools to cross this language<->logic gulf using so-called
"fact-based modeling" has been the focus of my last 12 years of research.

>> In studies I have looked at, the percentage of requirements errors
>> is somewhere between 30-40% of the overall number of faults during
>> the design lifecycle, and the 'industry standard' approach approach
>> to dealing with this problem is woefully indequate despite techniques
>> to detect and remove the errors.  A LOT Of time is spent doing
>> software requirements tracing as opposed to doing verification of
>> requirements.  People argue that one cannot verify the requirements
>> until the system has been built - which is complete BS but industry is
>> very slow to change. We have shown that using software architecture
>> modeling addresses a large percentage of system level problems early
>> in the design life cycle.  We are trying to convince industry.   Until
>> change happens, the parade of failures like the
>> MAX will continue.
>
> I'd love to hear more about that.

The Software Engineering Institute at CMU (where John Hudak works) is
one of the foremost (but by no means the only) eminent body working in
this space - nor is their approach the only one that has made
significant inroads into this class of problem.

<https://resources.sei.cmu.edu/asset_files/TechnicalNote/2006_004_001_14678.pdf>
<http://www.openaadl.org/>

Clifford Heath

jjhu...@gmail.com

Jan 12, 2020, 7:13:27 PM
Sorry - I get a bit carried away on this topic...
For requirements engineering verification one can google: formal and semi-formal requirements specification languages. RDAL and ReqSpec are ones I am familiar with.
Techniques to verify requirements include model checking. Google model checking. It is based on formal logics like LTL (Linear Temporal Logic) and CTL (Computation Tree Logic). One constructs state models from requirements and uses model-checking engines to analyze the structures. Model checking was actually used to verify a bus protocol in the early 90s and found *lots* of problems with the spec... that caused industry to 'wake up'.
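
To give a flavour of what a model checker consumes (the proposition names here are hypothetical, purely for illustration): in SPIN's LTL syntax, [] means "always" and <> means "eventually", so a requirement like "every pilot trim command is eventually honoured" could be written as the property

  [] (pilot_trim_request -> <> trim_applied)

and the checker then exhaustively searches the state space of the model for any execution that violates it, rather than relying on hand-picked test cases.
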
There are others that work on code, but these are very much research-y efforts.

Simulink has a model checker in its toolboxes (based on Promela); it is quite good.

We advocate using architecture design languages (ADLs), that is, a formal modeling notation to model different views of the architecture and capture properties of the system from which analysis can be done (e.g. signal latency, variable format and property consistency, processor utilization, bandwidth capacity, hazard analysis, etc.). The one that I had a hand in designing is the Architecture Analysis and Design Language (AADL). It is an SAE Aerospace standard. If things turn out well, it will be used on the next generation of helicopters for the army. We have been piloting its use on real systems for the last 2-3 years, and on pilot studies for the last 10 years.
For systems hazard analysis, google STPA (System-Theoretic Process Analysis), spearheaded by Nancy Leveson of MIT (she has consulted for Boeing).

Yes, I've seen software applied to fix hw problems but assessing the risk is complicated. The results can be catastrophic.
Ok, off my rant....

jjhu...@gmail.com

Jan 12, 2020, 7:23:19 PM
On Sunday, January 12, 2020 at 5:55:08 PM UTC-5, Phil Hobbs wrote:
I forgot to add that the act of building a formal model in AADL from the requirements forces one to *think* about system-wide impacts and to do analysis on the architectural model.
Requirements are written in English; one of the most widely used tools is MS Word,
another is DOORS.

Phil Hobbs

Jan 12, 2020, 7:33:40 PM
Thanks. I feel a bit like I'm drinking from a fire hose, which is
always my preferred way of learning stuff.... I'd be super interested
in an accessible presentation of methods for sanity-checking high-level
system requirements.

Being constitutionally lazy, I'm a huge fan of ways to work smarter
rather than harder. ;)

John Larkin

Jan 12, 2020, 8:04:56 PM
On Sun, 12 Jan 2020 15:10:43 -0800 (PST), George Herold
<gghe...@gmail.com> wrote:

>On Sunday, January 12, 2020 at 12:42:54 AM UTC-5, John Larkin wrote:
>> On 11 Jan 2020 17:50:07 -0800, Winfield Hill <winfie...@yahoo.com>
>> wrote:
>>
>> >Rick C wrote...
>> >>
>> >> That board is duck soup to lay out.
>> >
>> > I dunno, a 176-pin PLCCC and a 256-pin BGA, plus
>> > lots of other critical stuff, that's not so clear.
>> >
>> > Anyway, I think John made his point.
>>
>> There are four photodiode time stampers with 6 ps resolution, and
>> three delay generators with sub-ps resolution. There's a high-speed
>> SPI-like link to a control computer, and five more to energy
>> measurement boxes. Lots of controlled-impedance clocks and signals.
>Wow! That sound like quite a box. Four inputs? What's the dead time
>on a channel? More or less than $10k?
>
>George H.

It's a controller for a deep-UV MOPA laser, for IC lithography. The
pulse rate is about 6 KHz. Way less than $10K.

John Larkin

Jan 12, 2020, 8:07:24 PM
On Sun, 12 Jan 2020 16:58:40 +0000, Martin Brown
<'''newspam'''@nezumi.demon.co.uk> wrote:

>On 11/01/2020 14:57, jla...@highlandsniptechnology.com wrote:
>> On 11 Jan 2020 05:57:59 -0800, Winfield Hill <winfie...@yahoo.com>
>> wrote:
>>
>>> Rick C wrote...
>>>>
>>>> Then your very example of the Boeing plane is wrong
>>>> because no one has said the cause of the accident
>>>> was improperly coded software.
>>>
>>> Yes, it was an improper spec, with dangerous reliance
>>> on poor hardware.
>>
>> If code kills people, it was improperly coded.
>
>Not necessarily. The code written may well have exactly implemented the
>algorithm(s) that the clowns supervised by monkeys specified. It isn't
>the job of programmers to double check the workings of the people who do
>the detailed calculations of aerodynamic force vectors and torques.
>
>It is not the programmers fault if the systems engineering, failure
>analysis and aerodynamics calculations are incorrect in some way!

The management of two AOA sensors was insane. Fatal, actually. A
programmer should understand simple stuff like that.

Tom Gardner

Jan 13, 2020, 4:04:25 AM
On 13/01/20 01:07, John Larkin wrote:
> On Sun, 12 Jan 2020 16:58:40 +0000, Martin Brown
> <'''newspam'''@nezumi.demon.co.uk> wrote:
>
>> On 11/01/2020 14:57, jla...@highlandsniptechnology.com wrote:
>>> On 11 Jan 2020 05:57:59 -0800, Winfield Hill <winfie...@yahoo.com>
>>> wrote:
>>>
>>>> Rick C wrote...
>>>>>
>>>>> Then your very example of the Boeing plane is wrong
>>>>> because no one has said the cause of the accident
>>>>> was improperly coded software.
>>>>
>>>> Yes, it was an improper spec, with dangerous reliance
>>>> on poor hardware.
>>>
>>> If code kills people, it was improperly coded.
>>
>> Not necessarily. The code written may well have exactly implemented the
>> algorithm(s) that the clowns supervised by monkeys specified. It isn't
>> the job of programmers to double check the workings of the people who do
>> the detailed calculations of aerodynamic force vectors and torques.
>>
>> It is not the programmers fault if the systems engineering, failure
>> analysis and aerodynamics calculations are incorrect in some way!
>
> The management of two AOA sensors was insane. Fatal, actually. A
> programmer should understand simple stuff like that.

It is unrealistic to expect programmers to understand sensor
reliability. That is the job of the people specifying the
system design and encoding that in the system specification
and the software specification.

Programmers would have zero ability to deviate from implementing
the software spec, full stop. If they did knowingly deviate, it
would be a career ending decision - at best.

Aerospace engineers have lost their pension for far less
serious deviations, even though they had zero consequences.


DecadentLinux...@decadence.org

Jan 13, 2020, 4:27:17 AM
Tom Gardner <spam...@blueyonder.co.uk> wrote in news:ooWSF.30854
$Bf2....@fx39.am4:

> It is unrealistic to expect programmers to understand sensor
> reliability. That is the job of the people specifying the
> system design and encoding that in the system specification
> and the software specification.

I think it would be nice to have a full understanding of ANY
failure modes of ANY transducer whose readings I would be using to
program the actions of other equipment. So I would at least want to
be at those meetings. ;-)

So, in the 737 MAX scenario, I would want to know about the (attitude)
sensor sticking from icing up. As far as I know, the actual encoding
in them is a simple slot mask on a disc (optical encoder wheel),
which can resolve to a couple of ticks per degree with ease, more if a
higher resolution were needed.

I would place two wheels on each and a 'kicker' device that turns
it through its full travel and then releases it for reading again
(and maybe a heater for the bearings). That way it could be checked
for failed/free operation while in flight.

RBlack

Jan 13, 2020, 4:27:25 AM
In article <d0nj1f50mabot5tnf...@4ax.com>,
jla...@highlandsniptechnology.com says...
>
[snip]
>
> My Spice sims are often wrong initially, precisely because there are
> basically no consequences to running the first try without much
> checking. That is of course dangerous; we don't want to base a
> hardware design on a sim that runs and makes pretty graphs but is
> fundamentally wrong.

I just got bitten by a 'feature' of LTspice XVII. I don't remember IV
having this behaviour, but I don't have it installed any more:

If you make a tweak to a previously working circuit, which makes the
netlister fail (in my case it was an inductor shorted to ground at both
ends), it will pop up a warning to this effect, and then *run the sim
using the old netlist*.

It will then allow you to probe around on the new schematic, but the
schematic nodes are mapped onto the old netlist, so depending on what
you tweaked, what is displayed can range from slightly wrong to flat-out
impossible.

Anyone else seen this?

Phil Hobbs

Jan 13, 2020, 9:01:13 AM
Gee, Mr. Gardner, you're so manly--can I have your autograph? ;)

Nobody's talking about coders doing jazz on the spec AFAICT. Systems
folks do need to listen to them, is all. If they can't do that because
they don't understand the issues, that's a serious organizational
problem, on a level with the flawed spec.

> Aerospace engineers have lost their pension for far less
> serious deviations, even though they had zero consequences.

Fortunately that's illegal over here, even for cause.

John Larkin

Jan 13, 2020, 10:45:36 AM
Job ending, not career ending. I wouldn't code something that was
obviously dumb and dangerous.

If someone quits Boeing over an issue like this, it doesn't end their
career. They can find a better employer.

If an interviewer asked "why did you leave Boeing?" I'd tell them.

>
>Aerospace engineers have lost their pension for far less
>serious deviations, even though they had zero consequences.
>

How can a company take away an earned pension? Because an engineer did
something ethical? Sounds like a giant settlement would follow;
quadruple that pension.

John Larkin

Jan 13, 2020, 10:58:50 AM
On Mon, 13 Jan 2020 09:04:20 +0000, Tom Gardner
<spam...@blueyonder.co.uk> wrote:

https://philip.greenspun.com/blog/2019/03/21/optional-angle-of-attack-sensors-on-the-boeing-737-max/

Given dual sensors, why would any sane person decide to alternate
using one per flight?

A programmer would have to be awfully thick to not object to that.
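
For what it's worth, the cross-check everyone keeps describing is not much
code. A minimal sketch (hypothetical names and threshold, nothing to do with
Boeing's actual implementation):

#include <cmath>
#include <optional>

// Hypothetical dual-sensor vote: read both AoA inputs on every flight,
// and refuse to command trim when they disagree.
std::optional<double> voted_aoa(double left_deg, double right_deg)
{
    constexpr double kMaxDisagreeDeg = 5.0;   // invented threshold
    if (std::fabs(left_deg - right_deg) > kMaxDisagreeDeg)
        return std::nullopt;   // disagree: inhibit augmentation, alert the crew
    return (left_deg + right_deg) / 2.0;      // agree: use the average
}

The hard part isn't the code; it's somebody deciding the system is allowed to
behave that way.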

Tom Gardner

Jan 13, 2020, 11:23:01 AM
Well, by all accounts there were/are serious organisational
problems in Boeing. Those are probably a significant
contributor to there being a flawed spec.


>> Aerospace engineers have lost their pension for far less
>> serious deviations, even though they had zero consequences.
>
> Fortunately that's illegal over here, even for cause.

I was gobsmacked when I heard that, and don't understand it.
But then I don't even understand the concept of pension
"vesting".

Nonetheless, that's what his supervisor in Los Angeles (who did
his utmost to save him) said.

Tom Gardner

Jan 13, 2020, 11:35:30 AM
Agreed, but resigning is very different to deliberately
mis-implementing a spec.

In such circumstances I hope I would resign, but there have
been times in my life when that would have been impossible.



>> Aerospace engineers have lost their pension for far less
>> serious deviations, even though they had zero consequences.
>>
>
> How can a company take away an earned pension? Because an engineer did
> something ethical? Sounds like a giant settlement would follow;
> quadruple that pension.

I don't understand that either.

As I remember it, the conscientious worker was placed
under time pressure. Signoff required a signature and
his personal official stamp. He signed in advance of
completing the work, but did not affix his stamp.

A passing body saw that document, reported it, and
the process ground on inexorably from there.

Grossly disproportionate and unfair? You betcha, but
so what.

Tom Gardner

unread,
Jan 13, 2020, 11:40:59 AM1/13/20
to
Agreed. Especially given the poor reliability of AoA sensors.

The people that wrote and signed off that spec
bear a lot of responsibility.


> A programmer would have to be awfully thick to not object to that.

The programmer's job is to implement the spec, not to write it

They may have objected, and may have been overruled.

Have you worked in large software organisations?

John Larkin

unread,
Jan 13, 2020, 12:41:16 PM1/13/20
to
On Mon, 13 Jan 2020 16:40:55 +0000, Tom Gardner
Not in, but with. Most "just do our jobs", which means that they don't
care to learn much about the process that they are implementing.

And the hardware guys don't have much insight or visibility into the
software. Often, not much control either, in a large organization
where things are very firewalled.

Recipe for disaster.


--

John Larkin Highland Technology, Inc trk

The cork popped merrily, and Lord Peter rose to his feet.
"Bunter", he said, "I give you a toast. The triumph of Instinct over Reason"

Rick C

unread,
Jan 13, 2020, 12:49:15 PM1/13/20
to
Somewhat less significant, I was doing a bus timing analysis of an interface between a new board and existing boards in a new radio which was not yet in full production. I found a small timing spec miss with a Flash memory part. I tried to report it to the lead engineer but the response I got was "the unit has passed acceptance testing", as if that meant there were no errors in the radio.

I had been around and around with the company's hostile work environment and employees working around the system rather than doing what needed to be done. I let the matter drop. Not that it was particularly likely to cause a problem in the radio... but that was certainly a possibility, even if very, very small. These were military radios and everyone stressed how important it was that they work under all conditions. But the company had worn me down...

Companies suck. I won't be an employee again.

--

Rick C.

--+ Get 1,000 miles of free Supercharging
--+ Tesla referral code - https://ts.la/richard11209

John Larkin

unread,
Jan 13, 2020, 12:58:44 PM1/13/20
to
On Sun, 12 Jan 2020 15:10:43 -0800 (PST), George Herold
<gghe...@gmail.com> wrote:

>On Sunday, January 12, 2020 at 12:42:54 AM UTC-5, John Larkin wrote:
>> On 11 Jan 2020 17:50:07 -0800, Winfield Hill <winfie...@yahoo.com>
>> wrote:
>>
>> >Rick C wrote...
>> >>
>> >> That board is duck soup to lay out.
>> >
>> > I dunno, a 176-pin PLCCC and a 256-pin BGA, plus
>> > lots of other critical stuff, that's not so clear.
>> >
>> > Anyway, I think John made his point.
>>
>> There are four photodiode time stampers with 6 ps resolution, and
>> three delay generators with sub-ps resolution. There's a high-speed
>> SPI-like link to a control computer, and five more to energy
>> measurement boxes. Lots of controlled-impedance clocks and signals.
>Wow! That sound like quite a box. Four inputs? What's the dead time
>on a channel? More or less than $10k?
>
>George H.

This is a controller for a MOPA deep-UV laser, for IC lithography. The
pulse rate is about 6 kHz.


--

John Larkin Highland Technology, Inc trk

Rick C

unread,
Jan 13, 2020, 1:14:48 PM1/13/20
to
Not really an issue of firewalls. Your company does the same thing as we have pointed out. The 'Brat' doesn't look over your shoulder when you design the circuits, she just routes the board the way you tell her. That's not a firewall. That's delegation of responsibility. Same in larger companies.

Unlike many small companies, large ones have changed significantly over the decades. While many small companies are run by one or a small number of autocrats, large companies set up formal processes to make important decisions. The design process virtually always includes peer review. They try hard not to make mistakes, even small ones.

But there are many opportunities to make mistakes and we don't always avoid every one of them. Sometimes we push the boundaries and find our iceberg.

--

Rick C.

-+- Get 1,000 miles of free Supercharging
-+- Tesla referral code - https://ts.la/richard11209

Tom Gardner

unread,
Jan 13, 2020, 2:26:31 PM1/13/20
to
Seen that, and it even occurs within software world:
-analysts lob spec over wall to developers
-developers lob code over wall to testers
-developers lob tested code over wall to operations
-rinse and repeat, slowly

"Devops" tries to avoid that inefficiency.


> And the hardware guys don't have much insight or visibility into the
> software. Often, not much control either, in a large organization
> where things are very firewalled.

I've turned down job offers where the HR droids couldn't
deal with someone that successfully straddles both
hardware and software worlds.

> Recipe for disaster.

Yup, as we've seen.

mpm

unread,
Jan 13, 2020, 7:35:20 PM1/13/20
to
On Saturday, January 11, 2020 at 1:10:56 AM UTC-5, Rick C wrote:
> I think that is a load. Hardware often fouls up. The two space shuttle disasters were both hardware problems and both were preventable, but there was a clear lack of rigor in the design and execution. The Apollo 13 accident was hardware. The list goes on and on.
>
> Then your very example of the Boeing plane is wrong because no one has said the cause of the accident was improperly coded software.

Technically, one of those shuttle disasters was due to management not listening to their engineers, including those at Morton-Thiokol, who warned that the booster rocket O-rings were unsafe to launch at cold temperatures.

I don't consider that to be a "hardware problem" so much as an arrogantly stupid decision to launch under known, unsafe conditions.

As for the tiles (2nd shuttle loss), I am weirdly reminded of the Siegfried & Roy Vegas act with the white lions and tigers. They insured against every conceivable possibility (including the performance animals jumping into the crowd and causing a panic!). Everything that is, except the tiger viciously attacking Roy Horn on-stage.

You think you could see that coming..., or at least have a plan (however remote the possibility)?

With the shuttle heat tiles, NASA had to replace a lot of those after every flight. Did they never see the tiger?

mpm

unread,
Jan 13, 2020, 7:39:16 PM1/13/20
to
On Saturday, January 11, 2020 at 11:06:53 AM UTC-5, Winfield Hill wrote:
> DecadentLinux...@decadence.org wrote...
> >
> > Well, it WAS the finished product that failed, but
> > the true failure was their ability to ensure proper,
> > robust, failsafe coding.
>
> To me your operative word is, proper. I'm sure the
> code was robust in doing what it was spec'd to do,
> and likely included failsafe coding as well. It was
> improper specs that created a non-failsafe system.
>
> No doubt the coding was broken up into pieces, each of
> which acted in specied manners for its variable inputs,
> and which may well have obscured the overall task.
>
> In fact, the output code that implemented the minor
> "augmentation" function may not have been revisited
> for changes, after the systems-level decision was
> made to expand the use of the augmentation system,
> to add anti-stall.
>
>
> --
> Thanks,
> - Win

Some code is so complicated that it can not be adequately tested.
Ten years ago I read an article about how some Canadian warships were designed to "re-route" critical systems after sustaining battle damage, by using whatever hardware was then available.

A daunting task, for sure.

Just developing a test plan for something like that is amazingly complex.

Phil Hobbs

unread,
Jan 13, 2020, 8:36:08 PM1/13/20
to
The company's contributions towards your pension are part of your
compensation year by year. Taking that away is no different from trying
to claw back 20 years worth of salary.

Cheers

Phil Hobbs

Phil Hobbs

unread,
Jan 13, 2020, 8:43:28 PM1/13/20
to
On 2020-01-13 19:35, mpm wrote:
> On Saturday, January 11, 2020 at 1:10:56 AM UTC-5, Rick C wrote:
>> I think that is a load. Hardware often fouls up. The two space shuttle disasters were both hardware problems and both were preventable, but there was a clear lack of rigor in the design and execution. The Apollo 13 accident was hardware. The list goes on and on.
>>
>> Then your very example of the Boeing plane is wrong because no one has said the cause of the accident was improperly coded software.
>
> Technically, one of those shuttle disasters was due to management not listening to their engineers, including those at Morton-Thiokol, that the booster rocket O-Rings were unsafe to launch at cold temperature.
>
> I don't consider that to be a "hardware problem" so much as an arrogantly stupid decision to launch under known, unsafe conditions.

Diane Vaughan's "The Challenger Launch Decision" is an amazingly good
read on how they got to that point. She's a sociologist, of course, but
she took great pains to understand the culture and the issues, which led
her to completely re-evaluate her initial cultural-Marxist take on it.

She has my complete respect for her willingness to follow where the
facts led--a rare and valuable trait in our diminished, ideology-driven
days.

jla...@highlandsniptechnology.com

unread,
Jan 13, 2020, 9:43:45 PM1/13/20
to
On Mon, 13 Jan 2020 19:26:26 +0000, Tom Gardner
I interviewed with HP once. The guy looked at my resume and said "The
first thing you need to do is decide whether you're an engineer or a
programmer", so I walked out.

One big company that we work with has, I've heard, 12 levels of
engineering management. If an EE group wants a hole drilled in a
chassis, the request has to propagate up 5 or six management levels,
and then back down, to get to a mechanical engineer. Software is
similarly insulated. Any change fires off an enormous volume of
paperwork and customer "copy exact" notices, so most things just never
get done.


>
>> Recipe for disaster.
>
>Yup, as we've seen.


--

John Larkin Highland Technology, Inc

jla...@highlandsniptechnology.com

unread,
Jan 13, 2020, 9:46:30 PM1/13/20
to
On 11 Jan 2020 07:27:05 -0800, Winfield Hill <winfie...@yahoo.com>
wrote:

>DecadentLinux...@decadence.org wrote...
>>
>> Winfield Hill wrote:
>>
>>> Rick C wrote...
>>>>
>>>> Then your very example of the Boeing plane is wrong
>>>> because no one has said the cause of the accident
>>>> was improperly coded software.
>>>
>>> Yes, it was an improper spec, with dangerous reliance
>>> on poor hardware.
>>
>> Thanks Win. That guy is nuts. Boeing most certainly
>> did announce just a few months ago, that it was a
>> software fault.
>
> That's the opposite of my position. I'm sure the coders
> made the software do exactly what they were told to make
> it do.

But nobody ever writes a requirement document at the level of detail
that the programmers will work to. And few requirement docs are
all-correct and all-inclusive.

It sure helps if the programmers understand, and take responsibility
for, the actual system.

jla...@highlandsniptechnology.com

unread,
Jan 13, 2020, 9:50:24 PM1/13/20
to
On Sat, 11 Jan 2020 15:44:23 +0000 (UTC),
DecadentLinux...@decadence.org wrote:

>Winfield Hill <winfie...@yahoo.com> wrote in news:qvcpg901bm2
>@drn.newsguy.com:
>
>> DecadentLinux...@decadence.org wrote...
>>>
>>> Winfield Hill wrote:
>>>
>>>> Rick C wrote...
>>>>>
>>>>> Then your very example of the Boeing plane is wrong
>>>>> because no one has said the cause of the accident
>>>>> was improperly coded software.
>>>>
>>>> Yes, it was an improper spec, with dangerous reliance
>>>> on poor hardware.
>>>
>>> Thanks Win. That guy is nuts. Boeing most certainly
>>> did announce just a few months ago, that it was a
>>> software fault.
>>
>> That's the opposite of my position. I'm sure the coders
>> made the software do exactly what they were told to make
>> it do. It was system engineers and their managers, who
>> made the decisions and wrote the software specs. They
>> should not be allowed to simply blame "the software".
>>
>>
>
> Well, it WAS the finished product that failed, but the true failure
>was their ability to ensure proper, robust, failsafe coding.

Is there such a thing? Electronic design is based on physics and
corollary principles. I don't know of any hard principles that
programming applies. It's more of a craft than a science.

I think that electronics is also easier to design review than
software.

jla...@highlandsniptechnology.com

unread,
Jan 13, 2020, 9:52:00 PM1/13/20
to
On Sat, 11 Jan 2020 17:31:26 -0500, bitrex <us...@example.net> wrote:

>On 1/11/20 9:47 AM, jla...@highlandsniptechnology.com wrote:
>> On Fri, 10 Jan 2020 21:46:19 -0800 (PST), omni...@gmail.com wrote:
>>
>>> Hardware designs are more rigorously done than
>>> software designs. A large company had problems with a 737
>>> and a rocket to the space station...
>>>
>>> https://www.bloomberg.com/news/articles/2019-06-28/boeing-s-737-max-software-outsourced-to-9-an-hour-engineers
>>>
>>> I know programmers who do not care for rigor at home at work.
>>> I did hardware design with rigor and featuring reviews by caring
>>> electronics design engineers and marketing engineers.
>>>
>>> Software gets sloppy with OOPs.
>>> Object Oriented Programming.
>>> Windows 10 on a rocket to ISS space station.
>>> C++ mud.
>>
>> The easier it is to change things, the less careful people are about
>> doing them. Software, which includes FPGA code, seldom works the first
>> time. Almost never. The average hunk of fresh code has a mistake
>> roughly every 10 lines. Or was that three?
>>
>> FPGAs are usually better than procedural code, but are still mostly
>> done as hack-and-fix cycles, with software test benches. When we did
>> OTP (fuse based) FPGAs without test benching, we often got them right
>> first try. If compiles took longer, people would be more careful.
>>
>> PCBs usually work the first time, because they are checked and
>> reviewed, and that is because mistakes are slow and expensive to fix,
>> and very visible to everyone. Bridges and buildings are almost always
>> right the first time. They are even more expensive and slow and
>> visible.
>>
>> Besides, electronics and structures have established theory, but
>> software doesn't. Various people just sort of do it.
>>
>> My Spice sims are often wrong initially, precisely because there are
>> basically no consequences to running the first try without much
>> checking. That is of course dangerous; we don't want to base a
>> hardware design on a sim that runs and makes pretty graphs but is
>> fundamentally wrong.
>>
>
>Don't know why C++ is getting the rap here. Modern C++ design is
>rigorous, there are books about what to do and what not to do, and the
>language has built-in facilities to ensure that e.g. memory is never
>leaked, pointers always refer to an object that exists, and the user
>can't ever add feet to meters if they're not supposed to.

Pointers are evil.

jla...@highlandsniptechnology.com

unread,
Jan 13, 2020, 9:56:21 PM1/13/20
to
On Mon, 13 Jan 2020 09:27:19 -0000, RBlack <ne...@rblack01.plus.com>
wrote:

>In article <d0nj1f50mabot5tnf...@4ax.com>,
>jla...@highlandsniptechnology.com says...
>>
>[snip]
>>
>> My Spice sims are often wrong initially, precisely because there are
>> basically no consequences to running the first try without much
>> checking. That is of course dangerous; we don't want to base a
>> hardware design on a sim that runs and makes pretty graphs but is
>> fundamentally wrong.
>
>I just got bitten by a 'feature' of LTSpice XVII, I don't remember IV
>having this behaviour but I don't have it installed any more:
>
>If you make a tweak to a previously working circuit, which makes the
>netlister fail (in my case it was an inductor shorted to ground at both
>ends), it will pop up a warning to this effect, and then *run the sim
>using the old netlist*.

Well, don't ignore the warning.

>
>It will then allow you to probe around on the new schematic, but the
>schematic nodes are mapped onto the old netlist, so depending on what
>you tweaked, what is displayed can range from slightly wrong to flat-out
>impossible.
>
>Anyone else seen this?

LT4 would complain about, say, one end of a cap floating, or your
shorted inductor. The new one doesn't. I prefer it the new way.

I haven't seen the old/new netlist thing that you describe.

Rick C

unread,
Jan 14, 2020, 12:53:12 AM1/14/20
to
On Monday, January 13, 2020 at 7:35:20 PM UTC-5, mpm wrote:
> On Saturday, January 11, 2020 at 1:10:56 AM UTC-5, Rick C wrote:
> > I think that is a load. Hardware often fouls up. The two space shuttle disasters were both hardware problems and both were preventable, but there was a clear lack of rigor in the design and execution. The Apollo 13 accident was hardware. The list goes on and on.
> >
> > Then your very example of the Boeing plane is wrong because no one has said the cause of the accident was improperly coded software.
>
> Technically, one of those shuttle disasters was due to management not listening to their engineers, including those at Morton-Thiokol, that the booster rocket O-Rings were unsafe to launch at cold temperature.
>
> I don't consider that to be a "hardware problem" so much as an arrogantly stupid decision to launch under known, unsafe conditions.

I can't believe you are nit-picking this. Even if it isn't your definition of a hardware problem, it certainly isn't a software problem and that was the issue being discussed, software vs. hardware. There's no reason to discuss wetware issues other than how they impact software and hardware and in this case it was hardware that failed from the abuse by the wetware.

I guess what I'm really saying is, so what?


> As for the tiles (2nd shuttle loss), I am weirdly reminded of the Siegfried & Roy Vegas act with the white lions and tigers. They insured against every conceivable possibility (including the performance animals jumping into the crowd and causing a panic!). Everything that is, except the tiger viciously attacking Roy Horn on-stage.

Except that's not what happened. Go read about it. I get tired of educating you.


> You think you could see that coming..., or at least have a plan (however remote the possibility)?
>
> With the shuttle heat tiles, NASA had to replace a lot of those after every flight. Did they never see the tiger?

I think either, you again don't understand what happened, or you have simplified your understanding of the accident to "tiles fell off". I'll discuss this further with you if you want, but only after you educate yourself with the facts.

--

Rick C.

-++ Get 1,000 miles of free Supercharging
-++ Tesla referral code - https://ts.la/richard11209

Clifford Heath

unread,
Jan 14, 2020, 1:11:15 AM1/14/20
to
On 14/1/20 1:46 pm, jla...@highlandsniptechnology.com wrote:
> On 11 Jan 2020 07:27:05 -0800, Winfield Hill <winfie...@yahoo.com>
> wrote:
>
>> DecadentLinux...@decadence.org wrote...
>>>
>>> Winfield Hill wrote:
>>>
>>>> Rick C wrote...
>>>>>
>>>>> Then your very example of the Boeing plane is wrong
>>>>> because no one has said the cause of the accident
>>>>> was improperly coded software.
>>>>
>>>> Yes, it was an improper spec, with dangerous reliance
>>>> on poor hardware.
>>>
>>> Thanks Win. That guy is nuts. Boeing most certainly
>>> did announce just a few months ago, that it was a
>>> software fault.
>>
>> That's the opposite of my position. I'm sure the coders
>> made the software do exactly what they were told to make
>> it do.
>
> But nobody ever writes a requirement document at the level of detail
> that the programmers will work to. And few requirement docs are
> all-correct and all-inclusive.

Your comments lack nuance.

The definition of "all-correct" can only be made with reference to a
Turing machine that implements it.

Thus, the finished code is the (first and only) finished specification.

Corollary: If a specification is all-correct and all-inclusive, a
compiler can be written that implements it precisely.

The trouble is, no-one can tell whether the specification meets the
high-level goals of the system - not even the programmer usually.

The reason for "formal methods" is to be able to state the "high level
goals" in a precise way, and to show that the code cannot fail to meet
those goals.
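
As a toy formulation (notation only, not tied to any particular tool),
a high-level goal becomes a property that every behaviour of the
program must satisfy:

\[
\forall\, \sigma \in \mathit{Traces}(P) :\ \varphi(\sigma)
\]

and the point of the formal machinery is to discharge that obligation
mechanically rather than by inspection.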

CH.

Rick C

unread,
Jan 14, 2020, 1:14:30 AM1/14/20
to
On Monday, January 13, 2020 at 9:46:30 PM UTC-5, jla...@highlandsniptechnology.com wrote:
>
> But nobody ever writes a requirement document at the level of detail
> that the programmers will work to. And few requirement docs are
> all-correct and all-inclusive.
>
> It sure helps if the programmers understand, and take responsibility
> for, the actual system.

What Larkman doesn't understand is that the sort of formal requirements documents he is talking about are written for large, complex systems that he knows literally nothing about. Having never participated in such a design process he doesn't even understand that the programmers can't always know much about the things they are writing code for, because they can't be expert in all parts of the system they are writing code for.

So instead of expecting the coders to sanity check systems they don't and literally can't understand, just as no one in the company understands the entire airplane, they use the documents they are provided to define the software they are writing and then test according to the requirements that apply to that software. They don't try to analyze the requirements in the context of the rest of the system because that has already been done.

Larkman also doesn't understand that the requirements documents are written at every level of decomposition so that each requirement can be traced to the modules that are responsible for implementing it. It's a large process, but is essential to making sure the airplane does what you want it to. Can the process fail, yes, it's a human process after all. But it's a whole lot better than the Larkman method of having one guy in charge of everything and he does the hard part for everyone and lets them finish the work that he started. I guess we could design bicycles that way, but not airplanes.

I remember dealing with a layout guy who was pretty good, but was used to thinking in terms of absolute rules without always understanding them. He had a big power pour area running across the board to reach a resistor in a part of the circuit that was only to measure the voltage on the power plane. I told him he didn't need to make that run so fat, it could just be a thin trace like any other signal and explained what it was for. He refused to change it saying that was how you route power planes. Rather than fight that idea, I had him move the resistor to the area of the power plane and run a thin trace over to the rest of the circuit. He didn't like the idea, but couldn't argue, so did it my way.

This shows why programmers don't get to change low level requirements on their own. They either go through the process of pushing back on the high level requirements while they are being defined, or they code what needs to be coded as the requirements state. If the decision makers say the MCAS needs to work this way, the coders are not in a position to make changes once the requirements have been decomposed to the module level. It's not like the people doing the design work didn't give it a lot of thought. Having coders change the requirements would be like cops changing the laws they have to enforce.

I guess it's a good thing Larkman isn't a cop either.

--

Rick C.

+-- Get 1,000 miles of free Supercharging
+-- Tesla referral code - https://ts.la/richard11209

Rick C

unread,
Jan 14, 2020, 1:23:47 AM1/14/20
to
On Tuesday, January 14, 2020 at 1:11:15 AM UTC-5, Clifford Heath wrote:
>
> Your comments lack nuance.
>
> The definition of "all-correct" can only be made with reference to a
> Turing machine that implements it.
>
> This, the finished code is the (first and only) finished specification.
>
> Collorary: If a specification is all-correct and all-inclusive, a
> compiler can be written that implements it precisely.

Sorry, that is simply wrong. You can specify the behavior of a module without enough detail for a compiler to spit out code unless that compiler had a vast array of tools and libraries at its disposal. So I guess in theory, a compiler could be written, but it would be a ginormous task such as compiling the English language to computer code.

So, in either case, possible or not, your statement is of no practical value.


> The trouble is, no-one can tell whether the specification meets the
> high-level goals of the system - not even the programmer usually.

Huh???


> The reason for "formal methods" is to be able to state the "high level
> goals" in a precise way, and to show that the code cannot fail to meet
> those goals.

What does that have to do with your compiler statement? First you say specifications can't be fully complete and then you say they can be written "in a precise way". Are you saying "precise" as in easy to code but not necessarily complete???

--

Rick C.

+-+ Get 1,000 miles of free Supercharging
+-+ Tesla referral code - https://ts.la/richard11209

Tom Gardner

unread,
Jan 14, 2020, 3:19:32 AM1/14/20
to
HP hired me because I was both. Various parts of HP were
very different from each other.


> One big company that we work with has, I've heard, 12 levels of
> engineering management. If an EE group wants a hole drilled in a
> chassis, the request has to propagate up 5 or six management levels,
> and then back down, to get to a mechanical engineer. Software is
> similarly insulated. Any change fires off an enormous volume of
> paperwork and customer "copy exact" notices, so most things just never
> get done.

So you /do/ understand how programmers couldn't be
held responsible for implementing the spec.

At HP, if I had been promoted 6 times, I would
have been the CEO

Tom Gardner

unread,
Jan 14, 2020, 3:23:55 AM1/14/20
to
>> My Spice sims are often wrong initially, precisely because there are
>> basically no consequences to running the first try without much
>> checking. That is of course dangerous; we don't want to base a
>> hardware design on a sim that runs and makes pretty graphs but is
>> fundamentally wrong.
>>
>
> Don't know why C++ is getting the rap here. Modern C++ design is rigorous, there
> are books about what to do and what not to do, and the language has built-in
> facilities to ensure that e.g. memory is never leaked, pointers always refer to
> an object that exists, and the user can't ever add feet to meters if they're not
> supposed to.
>
> If the developer chooses to ignore it all like they always know better than the
> people who wrote the books on it, well, God bless...

Read the C++ FQA http://yosefk.com/c++fqa/

I'm particularly fond of the const correctness section :)

Martin Brown

unread,
Jan 14, 2020, 3:34:16 AM1/14/20
to
On 12/01/2020 20:20, Phil Hobbs wrote:
> On 2020-01-12 11:58, Martin Brown wrote:
>> On 11/01/2020 14:57, jla...@highlandsniptechnology.com wrote:
>>> On 11 Jan 2020 05:57:59 -0800, Winfield Hill <winfie...@yahoo.com>
>>> wrote:
>>>
>>>> Yes, it was an improper spec, with dangerous reliance
>>>> on poor hardware.
>>>
>>> If code kills people, it was improperly coded.
>>
>> Not necessarily. The code written may well have exactly implemented
>> the algorithm(s) that the clowns supervised by monkeys specified. It
>> isn't the job of programmers to double check the workings of the
>> people who do the detailed calculations of aerodynamic force vectors
>> and torques.
>>
>> It is not the programmers fault if the systems engineering, failure
>> analysis and aerodynamics calculations are incorrect in some way!
>
> That's a bit facile, I think. Folks who take an interest in their
> professions aren't that easy to confine that way.

Depends how the development is being done. One way is a formal software
specification that is handed to an outsourced team of cheap coders. They
literally have no idea what anything does beyond the boundaries of the
functional module specification that they have been given to implement.

My boss was pushing for that modus operandi just before I quit.

The idea is that you have well specified software modules in much the
same way as IC's that have datasheets describing exactly what they do.
It works pretty well for numerical analysis for instance NAGLIB.
(way more reliable than rolling your own code)

In an ideal software component model it can work. However, one place I
knew referred to their code repository (in the jargon of the time) as s/re/su/.
Problem was that stuff too often got put into it that was not fit for
purpose, and it would bite anyone foolish enough to reuse it very badly.
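
To make the "component with a datasheet" idea concrete, a sketch of what
such an interface contract might look like (an invented example, nothing
to do with NAGLIB itself):

// Hedged example of a datasheet-style component: the comment states the
// whole contract, so reuse never requires reading the body.
#include <cstdint>
#include <limits>

// Saturating signed 16-bit add.
// Inputs:  any int16_t a, b.
// Output:  a + b clamped to [-32768, 32767].
// Guarantees: no overflow, no undefined behaviour, no side effects.
int16_t sat_add16(int16_t a, int16_t b)
{
    int32_t s = int32_t(a) + int32_t(b);
    if (s > std::numeric_limits<int16_t>::max()) return std::numeric_limits<int16_t>::max();
    if (s < std::numeric_limits<int16_t>::min()) return std::numeric_limits<int16_t>::min();
    return int16_t(s);
}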

> Back in my one foray into big-system design, we design engineers were
> always getting in the systems guys' faces about various pieces of
> stupidity in the specs.  It was all pretty good-natured, and we wound up
> with the pain and suffering distributed about equally.

+1


--
Regards,
Martin Brown

RBlack

unread,
Jan 14, 2020, 4:15:41 AM1/14/20
to
In article <o7bq1f54cmsvthkp8...@4ax.com>,
jla...@highlandsniptechnology.com says...
>
> On Mon, 13 Jan 2020 09:27:19 -0000, RBlack <ne...@rblack01.plus.com>
> wrote:
>
> >In article <d0nj1f50mabot5tnf...@4ax.com>,
> >jla...@highlandsniptechnology.com says...
> >>
> >[snip]
> >>
> >> My Spice sims are often wrong initially, precisely because there are
> >> basically no consequences to running the first try without much
> >> checking. That is of course dangerous; we don't want to base a
> >> hardware design on a sim that runs and makes pretty graphs but is
> >> fundamentally wrong.
> >
> >I just got bitten by a 'feature' of LTSpice XVII, I don't remember IV
> >having this behaviour but I don't have it installed any more:
> >
> >If you make a tweak to a previously working circuit, which makes the
> >netlister fail (in my case it was an inductor shorted to ground at both
> >ends), it will pop up a warning to this effect, and then *run the sim
> >using the old netlist*.
>
> Well, don't ignore the warning.

Yep. Although it looks like 'warning' should be 'fatal error'. I'm
pretty sure LT4 would refuse to run the sim at all with no valid
netlist, rather than use the last-known-good one.

>
> >
> >It will then allow you to probe around on the new schematic, but the
> >schematic nodes are mapped onto the old netlist, so depending on what
> >you tweaked, what is displayed can range from slightly wrong to flat-out
> >impossible.
> >
> >Anyone else seen this?
>
> LT4 would complain about, say, one end of a cap floating, or your
> shorted inductor. The new one doesn't. I prefer it the new way.
>
> I haven't seen the old/new netlist thing that you describe.

Another recent one was a boost switcher. I had that working OK, then
added a linear post-regulator, using a model from TI. This added a
bunch of extra nodes to the netlist. The TI model turned out to have a
typo (the warning said something along the lines of 'diode D_XYZ
undefined. Using ideal diode model instead.').

The sim appeared to run OK anyway, but the FET dissipation trace was now
multiplying the wrong node voltages/currents (node names from the old
netlist) and it was out by an order of magnitude. Once I found the typo
and fixed it everything ran fine.
I suppose labelling all the nodes would also have caught that one.

I found LT4 more comfortable to use. Still, I can't complain about the
price. We have a bunch of PSPICE licenses (came bundled with OrCAD) but
LTSPICE is good enough that I've never even tried running it.

Clifford Heath

unread,
Jan 14, 2020, 4:28:12 AM1/14/20
to
On 14/1/20 5:23 pm, Rick C wrote:
> On Tuesday, January 14, 2020 at 1:11:15 AM UTC-5, Clifford Heath wrote:
>>
>> Your comments lack nuance.
>>
>> The definition of "all-correct" can only be made with reference to a
>> Turing machine that implements it.


^^^^^ This. You fail to understand this. It invalidates the rest of your
ignorant complaints.

End. You clearly don't get it, and I'm not going to waste more time on you.

RBlack

unread,
Jan 14, 2020, 4:34:24 AM1/14/20
to
On Tue, 14 Jan 2020 08:19:27 +0000, Tom Gardner
<spam...@blueyonder.co.uk> said:
>
> On 14/01/20 02:43, jla...@highlandsniptechnology.com wrote:
> > On Mon, 13 Jan 2020 19:26:26 +0000, Tom Gardner
> > <spam...@blueyonder.co.uk> wrote:
> >
> >> On 13/01/20 17:41, John Larkin wrote:
> >>> On Mon, 13 Jan 2020 16:40:55 +0000, Tom Gardner
> >>> <spam...@blueyonder.co.uk> wrote:

[snip]

> >> I've turned down job offers where the HR droids couldn't
> >> deal with someone that successfully straddles both
> >> hardware and software worlds.
> >
> > I interviewed with HP once. The guy looked at my resume and said "The
> > first thing you need to do is decide whether you're an engineer or a
> > programmer", so I walked out.
>
> HP hired me because I was both. Various parts of HP were
> very different from each other.

HP's tape drives division was my first 'proper' gig as an EE. They
didn't pigeon-hole people either, the hardware guys could write their
own test code if needed and the embedded software guys could debug their
code using a scope.

Next job was a small startup where everybody had to be a jack-of-all-
trades. Later on, as we grew and took on more people, it came as a bit
of a shock that the 'straddlers' were a tiny minority. It's something
we still struggle with when trying to hire people.

Rick C

unread,
Jan 14, 2020, 4:44:53 AM1/14/20
to
On Tuesday, January 14, 2020 at 4:28:12 AM UTC-5, Clifford Heath wrote:
> On 14/1/20 5:23 pm, Rick C wrote:
> > On Tuesday, January 14, 2020 at 1:11:15 AM UTC-5, Clifford Heath wrote:
> >>
> >> Your comments lack nuance.
> >>
> >> The definition of "all-correct" can only be made with reference to a
> >> Turing machine that implements it.
>
>
> ^^^^^ This. You fail to understand this. It invalidates the rest of your
> ignorant complaints.

I especially like the way you toss into the conversation totally unsupported statements. I expect you have no real familiarity with the process of developing code using requirements.


> End. You clearly don't get it, and I'm not going to waste more time on you.

I think that would please us both.

--

Rick C.

++- Get 1,000 miles of free Supercharging
++- Tesla referral code - https://ts.la/richard11209

Clifford Heath

unread,
Jan 14, 2020, 5:06:40 AM1/14/20
to
On 14/1/20 8:44 pm, Rick C wrote:
> On Tuesday, January 14, 2020 at 4:28:12 AM UTC-5, Clifford Heath wrote:
>> On 14/1/20 5:23 pm, Rick C wrote:
>>> On Tuesday, January 14, 2020 at 1:11:15 AM UTC-5, Clifford Heath wrote:
>>>>
>>>> Your comments lack nuance.
>>>>
>>>> The definition of "all-correct" can only be made with reference to a
>>>> Turing machine that implements it.
>>
>>
>> ^^^^^ This. You fail to understand this. It invalidates the rest of your
>> ignorant complaints.
>
> I expect you have no real familiarity with the process of developing code using requirements.

You'd be an idiot then.

Literally thousands of projects. Hell I have an archive here of over
three hundred projects' documents (several from each project, starting
with requirements) on which I participated or led the engineering teams,
and those are just from the 1990s (one of my four decades in the
software industry).

Much of that code is still running on tens of millions of machines
around the globe, coordinating systems management for mission-critical
functions in the world's largest enterprises.

Naah, I know nothing about software dev. Nothing you could learn anyhow.

Rick C

unread,
Jan 14, 2020, 5:17:49 AM1/14/20
to
I think your words speak volumes more than your resume.

I thought you were done talking to me???

BTW, you never provided any support for your statement about Turing machines. Do you have anything on that in your hundreds of project folders? I thought not.

This sort of discussion is pretty simple. If you make a claim, you should be able to support it with something more than "I'm an expert". I really don't get all the bluster when all you needed to do is provide some basis for the statement. But instead you choose to insult me on a personal level.

Yeah, I'm sure you were quite the project leader.

--

Rick C.

+++ Get 1,000 miles of free Supercharging
+++ Tesla referral code - https://ts.la/richard11209

Clifford Heath

unread,
Jan 14, 2020, 7:04:16 AM1/14/20
to
On 14/1/20 9:17 pm, Rick C wrote:
> On Tuesday, January 14, 2020 at 5:06:40 AM UTC-5, Clifford Heath wrote:
>> On 14/1/20 8:44 pm, Rick C wrote:
>>> On Tuesday, January 14, 2020 at 4:28:12 AM UTC-5, Clifford Heath wrote:
>>>> On 14/1/20 5:23 pm, Rick C wrote:
>>>>> On Tuesday, January 14, 2020 at 1:11:15 AM UTC-5, Clifford Heath wrote:
>>>>>>
>>>>>> Your comments lack nuance.
>>>>>>
>>>>>> The definition of "all-correct" can only be made with reference to a
>>>>>> Turing machine that implements it.
>>>>
>>>>
>>>> ^^^^^ This. You fail to understand this. It invalidates the rest of your
>>>> ignorant complaints.
>>>
>>> I expect you have no real familiarity with the process of developing code using requirements.
>>
>> You'd be an idiot then.
>>
>> Literally thousands of projects. Hell I have an archive here of over
>> three hundred projects' documents (several from each project, starting
>> with requirements) on which I participated or led the engineering teams,
>> and those are just from the 1990s (one of my four decades in the
>> software industry).
>>
>> Much of that code is still running on tens of millions of machines
>> around the globe, coordinating systems management for mission-critical
>> functions in the world's largest enterprises.
>>
>> Naah, I know nothing about software dev. Nothing you could learn anyhow.
>
> I think your words speak volumes more than your resume.
>
> I thought you were done talking to me???

I said I was done trying to teach you the theory of computation.

> Yeah, I'm sure you were quite the project leader.

Principal engineer. Held that title in that company for 12 of the 17
years I was there. I was also a founder.

DecadentLinux...@decadence.org

unread,
Jan 14, 2020, 8:03:23 AM1/14/20
to
Clifford Heath <no....@please.net> wrote in news:3YcTF.32875$Mc.7726
@fx35.iad:

> Corollary: If a specification is all-correct and all-inclusive, a
> compiler can be written that implements it precisely.
>

  FPGA programming (as one example) is the programmer telling the
hardware what switches he wants it to use in what order, etc. So,
just like a hardware-built device such as a clock that signals on the
hour but is 100% hardware driven, electronics can be built with or
without 'processors' and still have events get 'processed'.

Programming (and the electronics behind it) is just our refinement
of Frankenstein's big double blade throw switch on the wall.

Programming against a fault condition in a mission critical setting
is rife with problems.

  Like the attitude indicator. Why would one even freeze up? Pretty
cold up there in that airstream. So build a unit that has built-in
mechanical function protections to ensure it never stops
working and never gives a false reading based on a failed mechanical
aspect of its operation. Easy to say.

  I suggested maybe heating the thing internally (the part that is
inside the aircraft skin) and placing a mechanism in there that
allows it to be 'swung' through its entire range of motion as a test
of freedom of movement, and then released for use again. Could have
sensors and a computer watching the test run and looking at bearing
temps, etc. Then it would decide the unit is good and can be relied
on for an accurate reading, provided there is not a bird hanging off
the thing outside, or it hasn't been sheared off clean yet can still
be rotated in the test (the two most extreme failure modes).
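
  A rough sketch of what the pass/fail judgment for that kind of
built-in test might look like (all names and limits here are made up,
not from any real avionics code):

// Hypothetical sketch: judge a commanded sweep of the vane. The caller
// drives the vane through its range and records samples; this only
// decides whether it tracked freely and the bearing stayed cool.
struct SweepSample {
    double commanded_deg;
    double measured_deg;
    double bearing_temp_c;
};

bool self_test_ok(const SweepSample* samples, int n,
                  double max_err_deg = 1.0, double max_temp_c = 120.0)
{
    for (int i = 0; i < n; ++i) {
        double err = samples[i].commanded_deg - samples[i].measured_deg;
        if (err < 0.0) err = -err;
        if (err > max_err_deg) return false;               // stuck, bent or sheared
        if (samples[i].bearing_temp_c > max_temp_c) return false; // bearing dragging
    }
    return true;   // moved freely over the whole sweep
}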

jla...@highlandsniptechnology.com

unread,
Jan 14, 2020, 11:28:12 AM1/14/20
to
On Tue, 14 Jan 2020 09:15:36 -0000, RBlack <ne...@rblack01.plus.com>
When I get a warning, I fix it before I run the sim. That would
explain why I haven't seen the old-netlist-runs thing.

I do label a lot of nodes, but just the interesting ones, not all.

I need to force myself to check all the named nodes when I copy/paste
bits of a circuit. It duplicates all named nodes, which creates some
interesting shorts.

jla...@highlandsniptechnology.com

unread,
Jan 14, 2020, 11:39:44 AM1/14/20
to
On Tue, 14 Jan 2020 08:19:27 +0000, Tom Gardner
I understand that some people are content to just do their jobs and
cash their checks.

I recently discovered that one group has been responsible, for almost
20 years, for a bunch of instrumentation of which over 80% doesn't
work, and which is not used. But they still get their paychecks, so
don't rock the boat.




>
>At HP, if I had been promoted 6 times, I would
>have been the CEO
>
>
>>
>>
>>>
>>>> Recipe for disaster.
>>>
>>> Yup, as we've seen.
>>
>>


jla...@highlandsniptechnology.com

unread,
Jan 14, 2020, 11:43:07 AM1/14/20
to
Hey, you said that you wouldn't waste more time on him.

Steve Wilson

unread,
Jan 14, 2020, 11:44:01 AM1/14/20
to
jla...@highlandsniptechnology.com wrote:

> I do label a lot of nodes, but just the interesting ones, not all.

> I need to force myself to check all the named nodes when I copy/paste
> bits of a circuit. It duplicates all named nodes, which creates some
> interesting shorts.

When you name the nodes, use names made from adjacent components, such as
R1C1, Q1B, U1N, etc.

When you copy and paste, the component reference designations will change.
You can easily find the erroneous node names since they won't match the
adjacent components.

Steve Wilson

unread,
Jan 14, 2020, 11:49:53 AM1/14/20
to
jla...@highlandsniptechnology.com wrote:

> I do label a lot of nodes, but just the interesting ones, not all.

> I need to force myself to check all the named nodes when I copy/paste
> bits of a circuit. It duplicates all named nodes, which creates some
> interesting shorts.

When you name a node, use names made from adjacent components, such as R1C1,
Q1B, U1N, etc.

When you copy and paste, the component reference designations will change, but
the named nodes will remain the same. You can easily find them since they
won't match the new reference designations.

Tom Gardner

unread,
Jan 14, 2020, 12:27:48 PM1/14/20
to
So, what is in the job description of the programmers
under consideration? I'll bet the prime statement is
"implement the specification using the defined processes"


> I recently discovered that one group has been responsible, for almost
> 20 years, for a bunch of instrumentation of which over 80% doesn't
> work, and which is not used. But they still get their paychecks, so
> don't rock the boat.

Nothing new there!

Phil Hobbs

unread,
Jan 14, 2020, 12:36:03 PM1/14/20
to
I'd rather cut my own throat than do that for 20 years. Sometimes my
stuff doesn't work either, but that's due to it being insanely hard. A
lot of the insanely hard stuff works really well though, which makes it
all worthwhile. (I've often said that my ideal project is building a
computer starting with sand--it's a tendency I have to fight.)

Client work almost always succeeds, and the occasional failures are
mostly due to the customer's prevarication, such as taking my
proof-of-concept system, giving it to a CE outfit, and then pulling me
back in to attempt to fix the CE's mess--of course at the last minute,
when they've almost run out of money. That's happened a couple of
times, so I try very hard to discourage it. (The two were the
transcutaneous blood glucose/alcohol system and the blood-spot detector
for hens' eggs.)

Cheers

Phil Hobbs

--
Dr Philip C D Hobbs
Principal Consultant
ElectroOptical Innovations LLC / Hobbs ElectroOptics
Optics, Electro-optics, Photonics, Analog Electronics
Briarcliff Manor NY 10510

http://electrooptical.net
http://hobbs-eo.com

jla...@highlandsniptechnology.com

unread,
Jan 14, 2020, 12:49:10 PM1/14/20
to
On Tue, 14 Jan 2020 12:35:56 -0500, Phil Hobbs
<pcdhSpamM...@electrooptical.net> wrote:

>>
>> I recently discovered that one group has been responsible, for almost
>> 20 years, for a bunch of instrumentation of which over 80% doesn't
>> work, and which is not used. But they still get their paychecks, so
>> don't rock the boat.
>
>I'd rather cut my own throat than do that for 20 years. Sometimes my
>stuff doesn't work either, but that's due to it being insanely hard. A
>lot of the insanely hard stuff works really well though, which makes it
>all worthwhile. (I've often said that my ideal project is building a
>computer starting with sand--it's a tendency I have to fight.)

When our stuff doesn't work, it's usually because of some dumb
mistake, which we can fix.

The other kind of "failure" is when our stuff works, but the
customer's system or product doesn't work, or doesn't sell, or after
we do it, they discover that they can do it themselves.



>
>Client work almost always succeeds, and the occasional failures are
>mostly due to the customer's prevarication, such as taking my
>proof-of-concept system, giving it to a CE outfit, and then pulling me
>back in to attempt to fix the CE's mess--of course at the last minute,
>when they've almost run out of money. That's happened a couple of
>times, so I try very hard to discourage it. (The two were the
>transcutaneous blood glucose/alcohol system and the blood-spot detector
>for hens' eggs.)
>
>Cheers
>
>Phil Hobbs


--

Tom Gardner

unread,
Jan 14, 2020, 12:53:45 PM1/14/20
to
I very deliberately avoided the "20 years experience
being 1 year repeated 20 times" trap.

I use a specific example from my early career, and the
technique I used to avoid it, to sensitise youngsters
to the kind of decisions they may face in the future.

Herbert's “they’d chosen always the clear, safe course
that leads ever downward into stagnation” was an
awful warning for me.

But in some companies, and worse industries, that can be
a very difficult trap to avoid.

three_jeeps

unread,
Jan 14, 2020, 2:43:52 PM1/14/20
to
On Sunday, January 12, 2020 at 7:33:40 PM UTC-5, Phil Hobbs wrote:
> On 2020-01-12 19:13, jjhu...@gmail.com wrote:
> > On Sunday, January 12, 2020 at 5:55:08 PM UTC-5, Phil Hobbs wrote:
> >> On 2020-01-12 17:38, jjhu...@gmail.com wrote:
> >>> On Sunday, January 12, 2020 at 3:32:06 PM UTC-5,
> >>> DecadentLinux...@decadence..org wrote:
> >>>> Phil Hobbs <pcdhSpamM...@electrooptical.net> wrote in
> >>>> news:fb4888b5-e96f-1145...@electrooptical.net:
> >>>>
> >>>>> Back in my one foray into big-system design, we design
> >>>>> engineers were always getting in the systems guys' faces
> >>>>> about various pieces of stupidity in the specs. It was all
> >>>>> pretty good-natured, and we wound up with the pain and
> >>>>> suffering distributed about equally.
> >>>>>
> >>>>>
> >>>>
> >>>> That is how men get work done... even 'the programmers'. Very
> >>>> well said, there.
> >>>>
> >>>> That is like the old dig on 'the hourly help'.
> >>>>
> >>>> Some programmers are very smart. Others not so much.
> >>>>
> >>>> I guess choosing to go into it is not such a smart move so
> >>>> they take a hit from the start. :-)
> >>>
> >>
> >>> If that is how men get work done then they are not using
> >>> software and system engineering techniques developed in the last
> >>> 15-20 years and their results are *still* subject to the same
> >>> types of errors. I do research and teach in this area. A number
> >>> of studies, and one in particular, cites up to 70% of software
> >>> faults are introduced on the LHS of the 'V' development model
> >>> (Other software design lifecycle models have similar fault
> >>> percentages.) A major issue is that most of these errors are
> >>> observed at integration time (software+software,
> >>> software+hardware). The cost of defect removal along the RHS of
> >>> the 'V' development model is anywhere from 50-200X of the removal
> >>> cost along the LHS of the 'V'. (no wonder why systems cost so
> >>> much)
> >>
> >> Nice rant. Could you tell us more about the 'V' model?
> >>
> >>> The talk about errors in this thread are very high level and
> >>> most ppl have the mindset that they are thinking about errors at
> >>> the unit test level. There are numerous techniques developed to
> >>> identify and fix fault types throughout the entire development
> >>> lifecycle but regrettably a lot of them are not employed.
> >>
> >> What sorts of techniques to you use to find problems in the
> >> specifications?
> >>> Actually a large percentage of the errors are discovered and
> >>> fixed at that level. Errors of the type: units mismatch, variable
> >>> type mismatch, and a slew of concurrency issues aren't discovered
> >>> till integration time. Usually, at that point, there is a 'rush'
> >>> to get the system fielded. The horror stories and lessons learned
> >>> are well documented.
> >>
> >> Yup. Leaving too much stuff for the system integration step is a
> >> very very well-known way to fail.
> >>
> >>> IDK what exactly happened (yet) with the Boeing MAX development.
> >>> I do have info from some sources that cannot be disclosed at
> >>> this point. From what I've read, there were major mistakes made
> >>> from inception through implementation and integration. My
> >>> personal view, is that one should almost never (never?) place the
> >>> task on software to correct an inherently unstable airframe
> >>> design - it is putting a bandaid on the source of the problem.
> >>
> >> It's commonly done, though, isn't it? I remember reading Ben
> >> Rich's book on the Skunk Works, where he says that the F-117's very
> >> squirrelly handling characteristics were fixed up in software to
> >> make it a beautiful plane to fly. That was about 1980.
> >>
> >>> Another major issue is the hazard analysis and fault tolerance
> >>> approach was not done at the system (the redundancy approach
> >>> was pitiful, as well as the *logic* used in implementing it as
> >>> well as conceptual.
> >>
> >>> I do think that the better software engineers do have a more
> >>> holistic view of the system (hardware knowledge + system
> >>> operational knowledge) which will allow them to ask questions
> >>> when things don't 'seem right.' OTHO, the software engineers
> >>> should not go making assumptions about things and coding to those
> >>> assumptions. (It happens more than you think) It is the job of
> >>> the software architect to ensure that any development assumptions
> >>> are captured and specified in the software architecture.
> >>
> >> In real life, though, it's super important to have two-way
> >> communications during development, no? My large-system experience
> >> was all hardware (the first civilian satellite DBS system,
> >> 1981-83), so things were quite a bit simpler than in a large
> >> software-intensive system. I'd expect the need for bottom-up
> >> communication to be greater now rather than less.
> >>
> >>> In studies I have looked at, the percentage of requirements
> >>> errors is somewhere between 30-40% of the overall number of
> >>> faults during the design lifecycle, and the 'industry standard'
> >>> approach approach to dealing with this problem is woefully
> >>> indequate despite techniques to detect and remove the errors. A
> >>> LOT Of time is spent doing software requirements tracing as
> >>> opposed to doing verification of requirements. People argue that
> >>> one cannot verify the requirements until the system has been
> >>> built - which is complete BS but industry is very slow to change.
> >>> We have shown that using software architecture modeling addresses
> >>> a large percentage of system level problems early in the design
> >>> life cycle. We are trying to convince industry. Until change
> >>> happens, the parade of failures like the MAX will continue.
> >>
> >> I'd love to hear more about that.
> >>
> >> Cheers
> >>
> >> Phil Hobbs
> >>
>
> > Sorry - I get a bit carried away on this topic... For requirements
> > engineering verification one can google: formal and semi-formal
> > requirements specification languages. RDAL and ReqSpec are ones I am
> > familiar with. Techniques to verify requirements include model
> > checking. Google model checking. Based on formal logics like LTL
> > (Linear Temporal Logic) and CTL (Computation Tree Logic). One constructs
> > state models from requirements and uses model checking engines to
> > analyze the structures. Model checking was actually used to verify a
> > bus protocol in the early 90s and found *lots* of problems with the
> > spec...that caused industry to 'wake up'. There are others that work
> > on code, but these are very much research-y efforts.
> >
> > Simulink has a model checker in its toolboxes (based on Promela); it
> > is quite good.
> >
> > We advocate using architecture design languages (ADL's) that is a
> > formal modeling notation to model different views of the architecture
> > and capture properties of the system from which analysis can be done
> > (e.g. signal latency, variable format and property consistency,
> > processor utilization, bandwidth capacity, hazard analysis, etc.)
> > The one that I had a hand in designing is Architecture Analysis and
> > Design Language (AADL) It is an SAE Aerospace standard. IF things
> > turn out well, it will be used on the next generation of helecopters
> > for the army. We have been piloting it use on real systems for the
> > last 2-3 years, and last 10 years on pilot studies. For systems
> > hazard analysis, google STPA (System Theoretic Process Approach)
> > spearheaded by Nancy Leveson MIT (She has consulted to Boeing).
> >
> > Yes, I've seen software applied to fix hw problems but assessing the
> > risk is complicated. The results can be catastrophic. Ok, off my
> > rant....
> >
>
> Thanks. I feel a bit like I'm drinking from a fire hose, which is
> always my preferred way of learning stuff.... I'd be super interested
> in an accessible presentation of methods for sanity-checkin high-level
> system requirements.
>
> Being constitutionally lazy, I'm a huge fan of ways to work smarter
> rather than harder. ;)
>
> Cheers
>
> Phil Hobbs
>
>
> --
> Dr Philip C D Hobbs
> Principal Consultant
> ElectroOptical Innovations LLC / Hobbs ElectroOptics
> Optics, Electro-optics, Photonics, Analog Electronics
> Briarcliff Manor NY 10510
>
> http://electrooptical.net
> http://hobbs-eo.com

Phil, et al....
I meant to post some information wrt your inquiry about techniques to express and analyze requirements and about model checking but got OBE.
I found this slide set that rather concisely lays out the problem & approaches to express requirements.
https://www.iaria.org/conferences2018/filesICSEA18/RadekKoci_RequirementsModellingAndSoftwareSystemsImplementation.pdf

When I read through English text requirements, I tend to do two things simultaneously: map them to some abstract component in the system hierarchy (because the written requirements are usually spread all over the system), and re-express them in a semi-formal or formal notation (usually semi-formal, such as state-charts, ER diagrams, sequence diagrams, interaction diagrams). This gives me an idea of whether things are collectively coherent. I look for conflicts and omissions primarily.
I then take my understanding of the components and their interactions and construct an AADL model to understand who talks to who and what data is communicated, then map requirements to the components and do analysis on the model (signal flows and latency are usually the top properties). I then try to tease out what the fault tolerance approach is and model that, keeping in mind error types, and look for error flow, mitigation approaches, etc.
If there is an area that is really confusing, I'll construct state models and use model checking. Some useful tools are nuSMV, http://nusmv.fbk.eu/
and SPIN http://spinroot.com/spin/whatispin.html
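
As a toy illustration of the kind of property handed to a checker like
nuSMV or SPIN (the atomic propositions below are invented, not from any
real system), one states temporal-logic requirements over the state
model, e.g. a liveness and a safety property:

\[
\mathbf{G}\big(\mathit{sensors\_disagree} \rightarrow \mathbf{F}\,\mathit{augmentation\_off}\big)
\qquad
\mathbf{G}\,\lnot\big(\mathit{augmentation\_active} \land \mathit{sensor\_invalid}\big)
\]

The tool then either proves these hold on every path of the model or
produces a concrete counterexample trace.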
As a note, using model checking can be a challenge for the engineer. They have not seen anything like this in undergrad or grad school unless they are leaning more toward computer science. We looked at this issue 20 years ago and produced a number of reports that tried to present the approach as a tool kit and identified types of analysis and patterns that could be recognized and more easily applied by an engineer unfamiliar with the area. They are somewhere on the SEI website.

Speaking of model checking, below are two of the more often cited model checking approaches and successful applications. There is little 'how to' but more of here is the problem and how we solved it. (Details left to the reader ;) )

http://www.cs.cmu.edu/~emc/papers/Conference%20Papers/95_verification_fbc_protocol.pdf
https://link.springer.com/chapter/10.1007/3-540-60973-3_102

There is a report from NASA some years ago that gave some excellent guidelines in writing requirements - I can't locate it at the moment but this website has some good guidelines, many of which were in the NASA report.
https://qracorp.com/write-clear-requirements-document/
(It still amazes me that even now, requirements docs that I've seen don't do half of these things....)
Hope this helps
J

John Larkin

unread,
Jan 14, 2020, 2:45:50 PM1/14/20
to
On Tue, 14 Jan 2020 17:53:41 +0000, Tom Gardner
I was talking to my MD, a really wonderful lady, about problem
solving. The thing is, her mistakes might kill people, but I can blow
things up just to see what might happen.

--

John Larkin Highland Technology, Inc
picosecond timing precision measurement

jlarkin att highlandtechnology dott com
http://www.highlandtechnology.com

John Larkin

unread,
Jan 14, 2020, 2:48:31 PM1/14/20
to
On Tue, 14 Jan 2020 17:11:09 +1100, Clifford Heath
<no....@please.net> wrote:

>On 14/1/20 1:46 pm, jla...@highlandsniptechnology.com wrote:
>> On 11 Jan 2020 07:27:05 -0800, Winfield Hill <winfie...@yahoo.com>
>> wrote:
>>
>>> DecadentLinux...@decadence.org wrote...
>>>>
>>>> Winfield Hill wrote:
>>>>
>>>>> Rick C wrote...
>>>>>>
>>>>>> Then your very example of the Boeing plane is wrong
>>>>>> because no one has said the cause of the accident
>>>>>> was improperly coded software.
>>>>>
>>>>> Yes, it was an improper spec, with dangerous reliance
>>>>> on poor hardware.
>>>>
>>>> Thanks Win. That guy is nuts. Boeing most certainly
>>>> did announce just a few months ago, that it was a
>>>> software fault.
>>>
>>> That's the opposite of my position. I'm sure the coders
>>> made the software do exactly what they were told to make
>>> it do.
>>
>> But nobody ever writes a requirement document at the level of detail
>> that the programmers will work to. And few requirement docs are
>> all-correct and all-inclusive.
>
>Your comments lack nuance.

Absolutely. Sometimes common sense is safer than nuance.

(Not to start a political branch.)

John Larkin

unread,
Jan 14, 2020, 2:52:47 PM1/14/20
to
On Tue, 14 Jan 2020 16:43:56 -0000 (UTC), Steve Wilson <n...@spam.com>
wrote:
I'd rather use something that describes the signal, not the parts.
Like ADC_IN or something. So the plots make sense and can be used as
illustrations in manuals, for example.

--

John Larkin Highland Technology, Inc

bitrex

unread,
Jan 14, 2020, 4:31:45 PM1/14/20
to
On 1/13/20 9:51 PM, jla...@highlandsniptechnology.com wrote:
>>> My Spice sims are often wrong initially, precisely because there are
>>> basically no consequences to running the first try without much
>>> checking. That is of course dangerous; we don't want to base a
>>> hardware design on a sim that runs and makes pretty graphs but is
>>> fundamentally wrong.
>>>
>>
>> Don't know why C++ is getting the rap here. Modern C++ design is
>> rigorous, there are books about what to do and what not to do, and the
>> language has built-in facilities to ensure that e.g. memory is never
>> leaked, pointers always refer to an object that exists, and the user
>> can't ever add feet to meters if they're not supposed to.
>
> Pointers are evil.
>
>

That's why in modern times you avoid working with "naked" ones at all
costs. On x86 and ARM targets running a full operating system with
virtual memory, there's pretty much no good reason to use naked
pointers at all unless you are yourself writing a memory manager or
allocator. There are test suites to find all potential memory leaks!
There's no good excuse to have programs that leak resources anymore...
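For example, a minimal sketch (names and numbers made up) of owning a
buffer through std::unique_ptr so that no exit path can leak it:

// Owning a resource without naked new/delete: std::make_unique ties the
// buffer's lifetime to the owning scope, so every exit path (including
// exceptions) releases it, and a leak checker has nothing to report.
#include <cstdio>
#include <memory>
#include <vector>

struct Sample { double t, v; };

// Stand-in for whatever the real processing step would be.
double peak(const std::vector<Sample>& s) {
    double p = 0.0;
    for (const auto& x : s) if (x.v > p) p = x.v;
    return p;
}

int main() {
    // Owned by unique_ptr: freed automatically when 'data' goes out of scope.
    auto data = std::make_unique<std::vector<Sample>>();
    data->push_back({0.0, 1.2});
    data->push_back({1.0, 3.4});
    std::printf("peak = %.2f\n", peak(*data));
    return 0;   // no delete needed, no path that leaks
}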



Phil Hobbs

unread,
Jan 14, 2020, 4:43:07 PM1/14/20
to
That's a bit strong. It's still reasonable to use void* deep in the
implementation of templates for performance-critical stuff. My
clusterized EM simulator uses bare pointers in structs, because they
vectorize dramatically better, but again that's optimized innermost-loop
stuff.

For other things, std::shared_ptr, std::unique_ptr, std::weak_ptr, and
the standard containers are the bomb.
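A small sketch of that split, with made-up names: the smart pointer and
container own the storage, and the innermost loop sees only a
non-owning bare pointer into it:

// Ownership vs. hot loop: the unique_ptr/vector own and release the grid;
// the inner loop works through a plain pointer, the simple representation
// that vectorizes well. The pointer never owns anything.
#include <cstddef>
#include <cstdio>
#include <memory>
#include <vector>

struct Cell { double ex, ey, ez; };   // e.g. one field sample in a grid

int main() {
    // Ownership lives here; freed automatically at end of scope.
    auto grid = std::make_unique<std::vector<Cell>>(1000, Cell{0, 0, 0});

    // Non-owning raw pointer into the owned buffer, for the inner loop only.
    Cell* c = grid->data();
    const std::size_t n = grid->size();
    double energy = 0.0;
    for (std::size_t i = 0; i < n; ++i)
        energy += c[i].ex * c[i].ex + c[i].ey * c[i].ey + c[i].ez * c[i].ez;

    std::printf("energy = %g\n", energy);
    return 0;
}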

> There are test suites to find all potential memory leaks! There's no
> good excuse to have programs that leak resources anymore...

RAII is really good medicine. I used to like mudflap a lot, but it got
rolled up into GCC's sanitizers, which are super useful too.
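For anyone who hasn't met it, a minimal RAII sketch (hand-rolled only to
make the pattern visible; the standard library already does this for
memory, locks, and files, and the file name below is made up):

// RAII: acquire the resource in the constructor, release it in the
// destructor, so cleanup happens on every exit path, exceptions included.
#include <cstdio>
#include <stdexcept>

class File {
public:
    File(const char* path, const char* mode) : f_(std::fopen(path, mode)) {
        if (!f_) throw std::runtime_error("cannot open file");
    }
    ~File() { if (f_) std::fclose(f_); }   // always runs, even on throw
    File(const File&) = delete;            // one owner, no double close
    File& operator=(const File&) = delete;
    std::FILE* get() const { return f_; }
private:
    std::FILE* f_;
};

int main() try {
    File log("readings.txt", "w");          // hypothetical output file
    std::fprintf(log.get(), "ADC_IN = %d\n", 42);
    return 0;                               // fclose happens here
} catch (const std::exception& e) {
    std::fprintf(stderr, "%s\n", e.what());
    return 1;
}

The same shape covers mutexes, sockets, and transaction rollback:
acquire in the constructor, release in the destructor.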

Tom Gardner

unread,
Jan 14, 2020, 4:55:38 PM1/14/20
to
Some electronics/software people are in the position
that their products can kill people, even when they are
working as designed.

That /ought/ to colour their mentality and practices!
